tag
dict
content
listlengths
1
139
{ "category": "Runtime", "file_name": "crossdomain.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "A cross-domain policy file allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. See https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/xdomain.html for a description of the purpose and structure of the cross-domain policy file. The cross-domain policy file is installed in the root of a web server (i.e., the path is /crossdomain.xml). The crossdomain middleware responds to a path of /crossdomain.xml with an XML document such as: ``` <?xml version=\"1.0\"?> <!DOCTYPE cross-domain-policy SYSTEM \"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd\" > <cross-domain-policy> <allow-access-from domain=\"*\" secure=\"false\" /> </cross-domain-policy> ``` You should use a policy appropriate to your site. The examples and the default policy are provided to indicate how to syntactically construct a cross domain policy file they are not recommendations. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "development_middleware.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "For the most part we try to follow PEP 8 guidelines which can be viewed here: http://www.python.org/dev/peps/pep-0008/ Swift has a comprehensive suite of tests and pep8 checks that are run on all submitted code, and it is recommended that developers execute the tests themselves to catch regressions early. Developers are also expected to keep the test suite up-to-date with any submitted code changes. Swifts tests and pep8 checks can be executed in an isolated environment with tox: http://tox.testrun.org/ To execute the tests: Ensure pip and virtualenv are upgraded to satisfy the version requirements listed in the OpenStack global requirements: ``` pip install pip -U pip install virtualenv -U ``` Install tox: ``` pip install tox ``` Generate list of distribution packages to install for testing: ``` tox -e bindep ``` Now install these packages using your distribution package manager like apt-get, dnf, yum, or zypper. Run tox from the root of the swift repo: ``` tox ``` To run a selected subset of unit tests with pytest: Create a virtual environment with tox: ``` tox devenv -e py3 .env ``` Note Alternatively, here are the steps of manual preparation of the virtual environment: ``` virtualenv .env source .env/bin/activate pip3 install -r requirements.txt -r test-requirements.txt -c py36-constraints.txt pip3 install -e . deactivate ``` Activate the virtual environment: ``` source .env/bin/activate ``` Run some unit tests, for example: ``` pytest test/unit/common/middleware/crypto ``` Run all unit tests: ``` pytest test/unit ``` Note If you installed using cd ~/swift; sudo python setup.py develop, you may need to do cd ~/swift; sudo chown -R ${USER}:${USER} swift.egg-info prior to running tox. By default tox will run all of the unit test and pep8 checks listed in the tox.ini file envlist option. A subset of the test environments can be specified on the tox command line or by setting the TOXENV environment variable. For example, to run only the pep8 checks and python2.7 unit tests use: ``` tox -e pep8,py27 ``` or: ``` TOXENV=py27,pep8 tox ``` To run unit tests with python3.8: ``` tox -e py38 ``` Note As of tox version 2.0.0, most environment variables are not automatically passed to the test environment. Swifts tox.ini overrides this default behavior so that variable names matching SWIFT* and *proxy will be passed, but you may need to run tox --recreate for this to take effect after upgrading from tox <2.0.0. Conversely, if you do not want those environment variables to be passed to the test environment then you will need to unset them before calling tox. Also, if you ever encounter DistributionNotFound, try to use tox --recreate or remove the .tox directory to force tox to recreate the dependency list. Swifts tests require having an XFS directory available in /tmp or in the TMPDIR environment" }, { "data": "Swifts functional tests may be executed against a SAIO (Swift All In One) or other running Swift cluster using the command: ``` tox -e func ``` The endpoint and authorization credentials to be used by functional tests should be configured in the test.conf file as described in the section Setting up scripts for running Swift. The environment variable SWIFTTESTPOLICY may be set to specify a particular storage policy name that will be used for testing. When set, tests that would otherwise not specify a policy or choose a random policy from those available will instead use the policy specified. 
Tests that use more than one policy will include the specified policy in the set of policies used. The specified policy must be available on the cluster under test. For example, this command would run the functional tests using policy silver: ``` SWIFT_TEST_POLICY=silver tox -e func ``` To run a single functional test, use the --no-discover option together with a path to a specific test method, for example: ``` tox -e func -- --no-discover test.functional.tests.TestFile.testCopy ``` If the test.conf file is not found then the functional test framework will instantiate a set of Swift servers in the same process that executes the functional tests. This in-process test mode may also be enabled (or disabled) by setting the environment variable SWIFT_TEST_IN_PROCESS to a true (or false) value prior to executing tox -e func. When using the in-process test mode some server configuration options may be set using environment variables: the optional in-memory object server may be selected by setting the environment variable SWIFT_TEST_IN_MEMORY_OBJ to a true value. encryption may be added to the proxy pipeline by setting the environment variable SWIFT_TEST_IN_PROCESS_CONF_LOADER to encryption. a 2+1 EC policy may be installed as the default policy by setting the environment variable SWIFT_TEST_IN_PROCESS_CONF_LOADER to ec. logging to stdout may be enabled by setting SWIFT_TEST_DEBUG_LOGS. For example, this command would run the in-process mode functional tests with encryption enabled in the proxy-server: ``` SWIFT_TEST_IN_PROCESS=1 SWIFT_TEST_IN_PROCESS_CONF_LOADER=encryption \\ tox -e func ``` This particular example may also be run using the func-encryption tox environment: ``` tox -e func-encryption ``` The tox.ini file also specifies test environments for running other in-process functional test configurations, e.g.: ``` tox -e func-ec ``` To debug the functional tests, use the in-process test mode and pass the --pdb flag to tox: ``` SWIFT_TEST_IN_PROCESS=1 tox -e func -- --pdb \\ test.functional.tests.TestFile.testCopy ``` The in-process test mode searches for proxy-server.conf and swift.conf config files from which it copies config options and overrides some options to suit in-process testing. The search will first look for config files in a <custom_conf_source_dir> that may optionally be specified using the environment variable: ``` SWIFT_TEST_IN_PROCESS_CONF_DIR=<custom_conf_source_dir> ``` If SWIFT_TEST_IN_PROCESS_CONF_DIR is not set, or if a config file is not found in <custom_conf_source_dir>, the search will then look in the etc/ directory in the source tree. If the config file is still not found, the corresponding sample config file from etc/ is used (e.g. proxy-server.conf-sample or" }, { "data": "When using the in-process test mode SWIFT_TEST_POLICY may be set to specify a particular storage policy name that will be used for testing as described above. When set, this policy must exist in the swift.conf file and its corresponding ring file must exist in <custom_conf_source_dir> (if specified) or etc/. The test setup will set the specified policy to be the default and use its ring file properties for constructing the test object ring. This allows in-process testing to be run against various policy types and ring files. 
For example, this command would run the in-process mode functional tests using config files found in $HOME/my_tests and policy silver: ``` SWIFT_TEST_IN_PROCESS=1 SWIFT_TEST_IN_PROCESS_CONF_DIR=$HOME/my_tests \\ SWIFT_TEST_POLICY=silver tox -e func ``` The cross-compatibility tests in directory test/s3api are intended to verify that the Swift S3 API behaves in the same way as the AWS S3 API. They should pass when run against either a Swift endpoint (with S3 API enabled) or an AWS S3 endpoint. To run against an AWS S3 endpoint, the /etc/swift/test.conf file must be edited to provide AWS key IDs and secrets. Alternatively, an AWS CLI style credentials file can be loaded by setting the SWIFT_TEST_AWS_CONFIG_FILE environment variable, e.g.: ``` SWIFT_TEST_AWS_CONFIG_FILE=~/.aws/credentials pytest ./test/s3api ``` Note: When using SWIFT_TEST_AWS_CONFIG_FILE, the region defaults to us-east-1 and only the default credentials are loaded. Swift uses flake8 with the OpenStack hacking module to enforce coding style. Install flake8 and hacking with pip or by the packages of your operating system. It is advised to integrate flake8+hacking with your editor to get it automated and not get caught by Jenkins. For example for Vim the syntastic plugin can do this for you. The documentation in docstrings should follow the PEP 257 conventions (as mentioned in the PEP 8 guidelines). More specifically: Triple quotes should be used for all docstrings. If the docstring is simple and fits on one line, then just use one line. For docstrings that take multiple lines, there should be a newline after the opening quotes, and before the closing quotes. Sphinx is used to build documentation, so use the restructured text markup to designate parameters, return values, etc. Documentation on the sphinx specific markup can be found here: https://www.sphinx-doc.org/en/master/ To build documentation run: ``` pip install -r requirements.txt -r doc/requirements.txt sphinx-build -W -b html doc/source doc/build/html ``` and then browse to doc/build/html/index.html. These docs are auto-generated after every commit and available online at https://docs.openstack.org/swift/latest/. For a sanity check of your changes to man pages, use this command in the root of your Swift repo: ``` ./.manpages ``` You can have the following copyright and license statement at the top of each source file. Copyright assignment is optional. New files should contain the current year. Substantial updates can have another year added, and date ranges are not needed." } ]
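As a reference sketch of such a statement, the standard Apache 2.0 license header used across OpenStack projects is reproduced below; the copyright line (holder and year) is illustrative:
```
# Copyright (c) 2024 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
```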
{ "category": "Runtime", "file_name": "development_auth.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. I went ahead and ran probe/test_object_metadata_replication.py on my machine with both sync_method = rsync and sync_method = ssync - that works for me - but I didn't try it with object_post_as_copy = false Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that it's probably just our own personal preference. Rather than a comment that says \"You should use a list comprehension here\" - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: \"I find this more readable: <diff with working tested code>\" If the author (or another reviewer) agrees - it's possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description: What is broken - without this change What is impossible to do with Swift - without this change What is slower/worse/harder - without this change If you're not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a high consistent quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily it's easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code" }, { "data": "changes. Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some! 
and offer them to the patch author as a diff indicating to them that something like these tests I'm providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the test stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. It's quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, most of the time remember it's harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If you're not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have docstrings. Tests should obviate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation, or is contrary to existing documentation, it is probably not suitable to merge. Every change could have better documentation. Like with tests, a patch isn't done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs. manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful" }, { "data": "errors. You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When you're done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g. 
\"When I broke the change like this: <diff> it blew up like this: <unit test failure>\" It's not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but that's not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers it's imperative that you try your best to answer your own questions as part of your review. Do not say \"Does this blow up if it gets called when xyz\" - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say \"I thought this would blow up, so I wrote this test, but it seems fine.\" But if your initial reaction is \"I don't understand this\" or \"How does this even work?\" you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again diffs work best. Scoring is subjective. Try to realize you're making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that to the best of your abilities you have not been able, to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "change. Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior it's almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if it's broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects. 
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because there's less concrete objective value. I've seen lots of these kinds of changes get lost to the backlog. I've also seen some success where multiple authors have collaborated to push-over a change rather than provide a review, ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete, correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider whether it needs to be fixed before the change is suitable to merge vs. fixing it in a follow-up change. Consider if the change makes Swift so undeniably better and it was deployed in production without making any additional changes would it still be correct and complete? Would releasing the change to production without any additional follow up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive review scores that look different from your reviews - a +2 instead of a +1. But it's exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason it's different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you don't know how to proceed respond to the reviewer's comments on the change and ask for help. We'd love to try and help." } ]
{ "category": "Runtime", "file_name": "discoverability.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "This document provides general guidance for deploying and configuring Swift. Detailed descriptions of configuration options can be found in the configuration documentation. Swift is designed to run on commodity hardware. RAID on the storage drives is not required and not recommended. Swifts disk usage pattern is the worst case possible for RAID, and performance degrades very quickly using RAID 5 or 6. The Swift services run completely autonomously, which provides for a lot of flexibility when architecting the hardware deployment for Swift. The 4 main services are: Proxy Services Object Services Container Services Account Services The Proxy Services are more CPU and network I/O intensive. If you are using 10g networking to the proxy, or are terminating SSL traffic at the proxy, greater CPU power will be required. The Object, Container, and Account Services (Storage Services) are more disk and network I/O intensive. The easiest deployment is to install all services on each server. There is nothing wrong with doing this, as it scales each service out horizontally. Alternatively, one set of servers may be dedicated to the Proxy Services and a different set of servers dedicated to the Storage Services. This allows faster networking to be configured to the proxy than the storage servers, and keeps load balancing to the proxies more manageable. Storage Services scale out horizontally as storage servers are added, and the overall API throughput can be scaled by adding more proxies. If you need more throughput to either Account or Container Services, they may each be deployed to their own servers. For example you might use faster (but more expensive) SAS or even SSD drives to get faster disk I/O to the databases. A high-availability (HA) deployment of Swift requires that multiple proxy servers are deployed and requests are load-balanced between them. Each proxy server instance is stateless and able to respond to requests for the entire cluster. Load balancing and network design is left as an exercise to the reader, but this is a very important part of the cluster, so time should be spent designing the network for a Swift cluster. Swift comes with an integral web front end. However, it can also be deployed as a request processor of an Apache2 using mod_wsgi as described in Apache Deployment Guide. The first step is to determine the number of partitions that will be in the ring. We recommend that there be a minimum of 100 partitions per drive to insure even distribution across the drives. A good starting point might be to figure out the maximum number of drives the cluster will contain, and then multiply by 100, and then round up to the nearest power of two. For example, imagine we are building a cluster that will have no more than 5,000 drives. That would mean that we would have a total number of 500,000 partitions, which is pretty close to 2^19, rounded up. It is also a good idea to keep the number of partitions small (relatively). The more partitions there are, the more work that has to be done by the replicators and other backend jobs and the more memory the rings consume in" }, { "data": "The goal is to find a good balance between small rings and maximum cluster size. The next step is to determine the number of replicas to store of the data. Currently it is recommended to use 3 (as this is the only value that has been tested). The higher the number, the more storage that is used but the less likely you are to lose data. 
It is also important to determine how many zones the cluster should have. It is recommended to start with a minimum of 5 zones. You can start with fewer, but our testing has shown that having at least five zones is optimal when failures occur. We also recommend trying to configure the zones at as high a level as possible to create as much isolation as possible. Some example things to take into consideration can include physical location, power availability, and network connectivity. For example, in a small cluster you might decide to split the zones up by cabinet, with each cabinet having its own power and network connectivity. The zone concept is very abstract, so feel free to use it in whatever way best isolates your data from failure. Each zone exists in a region. A region is also an abstract concept that may be used to distinguish between geographically separated areas as well as within the same datacenter. Regions and zones are referenced by a positive integer. You can now start building the ring with: ``` swift-ring-builder <builder_file> create <part_power> <replicas> <min_part_hours> ``` This will start the ring build process creating the <builder_file> with 2^<part_power> partitions. <min_part_hours> is the time in hours before a specific partition can be moved in succession (24 is a good value for this). Devices can be added to the ring with: ``` swift-ring-builder <builder_file> add r<region>z<zone>-<ip>:<port>/<device_name>_<meta> <weight> ``` This will add a device to the ring where <builder_file> is the name of the builder file that was created previously, <region> is the number of the region the zone is in, <zone> is the number of the zone this device is in, <ip> is the ip address of the server the device is in, <port> is the port number that the server is running on, <device_name> is the name of the device on the server (for example: sdb1), <meta> is a string of metadata for the device (optional), and <weight> is a float weight that determines how many partitions are put on the device relative to the rest of the devices in the cluster (a good starting point is 100.0 x TB on the drive). Add each device that will be initially in the cluster. Once all of the devices are added to the ring, run: ``` swift-ring-builder <builder_file> rebalance ``` This will distribute the partitions across the drives in the ring. It is important whenever making changes to the ring to make all the changes required before running rebalance. This will ensure that the ring stays as balanced as possible, and as few partitions are moved as possible. The above process should be done to make a ring for each storage service (Account, Container and" }, { "data": "Object). The builder files will be needed in future changes to the ring, so it is very important that these be kept and backed up. The resulting .tar.gz ring file should be pushed to all of the servers in the cluster. For more information about building rings, running swift-ring-builder with no options will display help text with available commands and options. More information on how the ring works internally can be found in the Ring Overview. The lack of true asynchronous file I/O on Linux leaves the object-server workers vulnerable to misbehaving disks. Because any object-server worker can service a request for any disk, and a slow I/O request blocks the eventlet hub, a single slow disk can impair an entire storage node. This also prevents object servers from fully utilizing all their disks during heavy load. 
Another way to get full I/O isolation is to give each disk on a storage node a different port in the storage policy rings. Then set the servers_per_port option in the object-server config. NOTE: while the purpose of this config setting is to run one or more object-server worker processes per disk, the implementation just runs object-servers per unique port of local devices in the rings. The deployer must combine this option with appropriately-configured rings to benefit from this feature. Here's an example (abbreviated) old-style ring (2 node cluster with 2 disks each): ```
Devices:  id  region  zone  ip address  port  replication ip  replication port  name
           0       1     1     1.1.0.1  6200         1.1.0.1              6200    d1
           1       1     1     1.1.0.1  6200         1.1.0.1              6200    d2
           2       1     2     1.1.0.2  6200         1.1.0.2              6200    d3
           3       1     2     1.1.0.2  6200         1.1.0.2              6200    d4
``` And here's the same ring set up for servers_per_port: ```
Devices:  id  region  zone  ip address  port  replication ip  replication port  name
           0       1     1     1.1.0.1  6200         1.1.0.1              6200    d1
           1       1     1     1.1.0.1  6201         1.1.0.1              6201    d2
           2       1     2     1.1.0.2  6200         1.1.0.2              6200    d3
           3       1     2     1.1.0.2  6201         1.1.0.2              6201    d4
``` When migrating from normal to servers_per_port, perform these steps in order: Upgrade Swift code to a version capable of doing servers_per_port. Enable servers_per_port with a value greater than zero. Restart swift-object-server processes with a SIGHUP. At this point, you will have the servers_per_port number of swift-object-server processes serving all requests for all disks on each node. This preserves availability, but you should perform the next step as quickly as possible. Push out new rings that actually have different ports per disk on each server. One of the ports in the new ring should be the same as the port used in the old ring (6200 in the example above). This will cover existing proxy-server processes who haven't loaded the new ring yet. They can still talk to any storage node regardless of whether or not that storage node has loaded the ring and started object-server processes on the new ports. If you do not run a separate object-server for replication, then this setting must be available to the object-replicator and object-reconstructor (i.e. appear in the [DEFAULT] config section). Most Swift services fall into two categories: Swift's wsgi servers and background daemons. For more information specific to the configuration of Swift's wsgi servers with paste deploy see General Server Configuration." }, { "data": "Configuration for servers and daemons can be expressed together in the same file for each type of server, or separately. If a required section for the service trying to start is missing there will be an error. The sections not used by the service are ignored. Consider the example of an object storage node. 
By convention, configuration for the object-server, object-updater, object-replicator, object-auditor, and object-reconstructor exists in a single file /etc/swift/object-server.conf: ``` [DEFAULT] reclaim_age = 604800 [pipeline:main] pipeline = object-server [app:object-server] use = egg:swift#object [object-replicator] [object-updater] [object-auditor] ``` Swift services expect a configuration path as the first argument: ``` $ swift-object-auditor Usage: swift-object-auditor CONFIG [options] Error: missing config path argument ``` If you omit the object-auditor section this file could not be used as the configuration path when starting the swift-object-auditor daemon: ``` $ swift-object-auditor /etc/swift/object-server.conf Unable to find object-auditor config section in /etc/swift/object-server.conf ``` If the configuration path is a directory instead of a file all of the files in the directory with the file extension .conf will be combined to generate the configuration object which is delivered to the Swift service. This is referred to generally as directory based configuration. Directory based configuration leverages ConfigParser's native multi-file support. Files ending in .conf in the given directory are parsed in lexicographical order. Filenames starting with . are ignored. A mixture of file and directory configuration paths is not supported - if the configuration path is a file only that file will be parsed. The Swift service management tool swift-init has adopted the convention of looking for /etc/swift/{type}-server.conf.d/ if the file /etc/swift/{type}-server.conf file does not exist. When using directory based configuration, if the same option under the same section appears more than once in different files, the last value parsed is said to override previous occurrences. You can ensure proper override precedence by prefixing the files in the configuration directory with numerical values: ``` /etc/swift/ default.base object-server.conf.d/ 000_default.conf -> ../default.base 001_default-override.conf 010_server.conf 020_replicator.conf 030_updater.conf 040_auditor.conf ``` You can inspect the resulting combined configuration object using the swift-config command line tool. Swift uses paste.deploy (https://pypi.org/project/Paste/) to manage server configurations. Detailed descriptions of configuration options can be found in the configuration documentation. Default configuration options are set in the [DEFAULT] section, and any options specified there can be overridden in any of the other sections BUT ONLY BY USING THE SYNTAX set option_name = value. This is the unfortunate way paste.deploy works and I'll try to explain it in full. First, here's an example paste.deploy configuration file: ``` [DEFAULT] name1 = globalvalue name2 = globalvalue name3 = globalvalue set name4 = globalvalue [pipeline:main] pipeline = myapp [app:myapp] use = egg:mypkg#myapp name2 = localvalue set name3 = localvalue set name5 = localvalue name6 = localvalue ``` The resulting configuration that myapp receives is: ``` global {'file': '/etc/mypkg/wsgi.conf', 'here': '/etc/mypkg', 'name1': 'globalvalue', 'name2': 'globalvalue', 'name3': 'localvalue', 'name4': 'globalvalue', 'name5': 'localvalue', 'set name4': 'globalvalue'} local {'name6': 'localvalue'} ``` So, name1 got the global value which is fine since it's only in the DEFAULT section anyway. name2 got the global value from DEFAULT even though it appears to be overridden in the app:myapp subsection. 
This is just the unfortunate way" }, { "data": "paste.deploy works (at least at the time of this writing.) name3 got the local value from the app:myapp subsection because it is using the special paste.deploy syntax of set option_name = value. So, if you want a default value for most app/filters but want to override it in one subsection, this is how you do it. name4 got the global value from DEFAULT since it's only in that section anyway. But, since we used the set syntax in the DEFAULT section even though we shouldn't, notice we also got a set name4 variable. Weird, but probably not harmful. name5 got the local value from the app:myapp subsection since it's only there anyway, but notice that it is in the global configuration and not the local configuration. This is because we used the set syntax to set the value. Again, weird, but not harmful since Swift just treats the two sets of configuration values as one set anyway. name6 got the local value from the app:myapp subsection since it's only there, and since we didn't use the set syntax, it's only in the local configuration and not the global one. Though, as indicated above, there is no special distinction with Swift. That's quite an explanation for something that should be so much simpler, but it might be important to know how paste.deploy interprets configuration files. The main rule to remember when working with Swift configuration files is: Note: Use the set option_name = value syntax in subsections if the option is also set in the [DEFAULT] section. Don't get in the habit of always using the set syntax or you'll probably mess up your non-paste.deploy configuration files. Some proxy-server configuration options may be overridden for individual Storage Policies by including per-policy config section(s). These options are: sorting_method, read_affinity, write_affinity, write_affinity_node_count, write_affinity_handoff_delete_count. The per-policy config section name must be of the form: ``` [proxy-server:policy:<policy index>] ``` Note: The per-policy config section name should refer to the policy index, not the policy name. Note: The first part of the per-policy config section name must match the name of the proxy-server config section. This is typically proxy-server as shown above, but if different then the names of any per-policy config sections must be changed accordingly. The value of an option specified in a per-policy section will override any value given in the proxy-server section for that policy only. Otherwise the value of these options will be that specified in the proxy-server section. For example, the following section provides policy-specific options for a policy with index 3: ``` [proxy-server:policy:3] sorting_method = affinity read_affinity = r2=1 write_affinity = r2 write_affinity_node_count = 1 * replicas write_affinity_handoff_delete_count = 2 ``` Note: It is recommended that per-policy config options are not included in the [DEFAULT] section. If they are then the following behavior applies. Per-policy config sections will inherit options in the [DEFAULT] section of the config file, and any such inheritance will take precedence over inheriting options from the proxy-server config section. Per-policy config section options will override options in the [DEFAULT] section. 
Unlike the behavior described under General Server Configuration for paste-deploy filter and app sections, the set keyword is not required for options to override in per-policy config sections. For example, given the following settings in a config file: ``` [DEFAULT] sorting_method = affinity read_affinity = r0=100 write_affinity = r0 [app:proxy-server] use = egg:swift#proxy set read_affinity = r1=100 write_affinity = r1 [proxy-server:policy:0] sorting_method = affinity write_affinity = r1 ``` would result in policy with index 0 having settings: read_affinity = r0=100 (inherited from the [DEFAULT] section) write_affinity = r1 (specified in the policy 0 section) and any other policy would have the default settings of: read_affinity = r1=100 (set in the proxy-server section) write_affinity = r0 (inherited from the [DEFAULT] section) Many features in Swift are implemented as middleware in the proxy-server pipeline. See Middleware and the proxy-server.conf-sample file for more information. In particular, the use of some type of authentication and authorization middleware is highly recommended. Several of the Services rely on Memcached for caching certain types of lookups, such as auth tokens, and container/account existence. Swift does not do any caching of actual object data. Memcached should be able to run on any servers that have available RAM and CPU. Typically Memcached is run on the proxy servers. The memcache_servers config option in the proxy-server.conf should contain all memcached servers. When a container gets sharded the root container will still be the primary entry point to many container requests, as it provides the list of shards. To take load off the root container Swift by default caches the list of shards returned. As the number of shards for a root container grows to more than 3k the memcache default max size of 1MB can be reached. If you over-run your max configured memcache size you'll see messages like: ``` Error setting value in memcached: 127.0.0.1:11211: SERVER_ERROR object too large for cache ``` When you see these messages your root containers are getting hammered and probably returning 503 responses to clients. Override the default 1MB limit to 5MB with something like: ``` /usr/bin/memcached -I 5000000 ... ``` Memcache has a stats sizes option that can point out the current size usage. As this reaches the current max an increase might be in order: ``` stats sizes STAT 160 2 STAT 448 1 STAT 576 1 END ``` Time may be relative but it is relatively important for Swift! Swift uses timestamps to determine which is the most recent version of an object. It is very important for the system time on each server in the cluster to be synced as closely as possible (more so for the proxy server, but in general it is a good idea for all the servers). Typical deployments use NTP with a local NTP server to ensure that the system times are as close as possible. This should also be monitored to ensure that the times do not vary too much. Most services support either a workers or concurrency value in the settings. This allows the services to make effective use of the cores available. A good starting point is to set the concurrency level for the proxy and storage services to 2 times the number of cores available. If more than one service is sharing a server, then some experimentation may be needed to find the best" }, { "data": "balance. For example, one operator reported using the following settings in a production Swift cluster: Proxy servers have dual quad core processors (i.e. 
8 cores); testing has shown 16 workers to be a pretty good balance when saturating a 10g network and gives good CPU utilization. Storage server processes all run together on the same servers. These servers have dual quad core processors, for 8 cores total. The Account, Container, and Object servers are run with 8 workers each. Most of the background jobs are run at a concurrency of 1, with the exception of the replicators which are run at a concurrency of 2. The max_clients parameter can be used to adjust the number of client requests an individual worker accepts for processing. The fewer requests being processed at one time, the less likely a request that consumes the worker's CPU time, or blocks in the OS, will negatively impact other requests. The more requests being processed at one time, the more likely one worker can utilize network and disk capacity. On systems that have more cores, and more memory, where one can afford to run more workers, raising the number of workers and lowering the maximum number of clients serviced per worker can lessen the impact of CPU intensive or stalled requests. The nice_priority parameter can be used to set program scheduling priority. The ionice_class and ionice_priority parameters can be used to set I/O scheduling class and priority on the systems that use an I/O scheduler that supports I/O priorities. As at kernel 2.6.17 the only such scheduler is the Completely Fair Queuing (CFQ) I/O scheduler. If you run your Storage servers all together on the same servers, you can slow down the auditors or prioritize object-server I/O via these parameters (but probably do not need to change it on the proxy). It is a new feature and the best practices are still being developed. On some systems it may be required to run the daemons as root. For more info also see setpriority(2) and ioprio_set(2). The above configuration settings should be taken as suggestions and testing of configuration settings should be done to ensure best utilization of CPU, network connectivity, and disk I/O. Swift is designed to be mostly filesystem agnostic; the only requirement being that the filesystem supports extended attributes (xattrs). After thorough testing with our use cases and hardware configurations, XFS was the best all-around choice. If you decide to use a filesystem other than XFS, we highly recommend thorough testing. For distros with more recent kernels (for example Ubuntu 12.04 Precise), we recommend using the default settings (including the default inode size of 256 bytes) when creating the file system: ``` mkfs.xfs -L D1 /dev/sda1 ``` In the last couple of years, XFS has made great improvements in how inodes are allocated and used. Using the default inode size no longer has an impact on performance. For distros with older kernels (for example Ubuntu 10.04 Lucid), some settings can dramatically impact performance. We recommend the following when creating the file system: ``` mkfs.xfs -i size=1024 -L D1 /dev/sda1 ``` Setting the inode size is important, as XFS stores xattr data in the" }, { "data": "inode. If the metadata is too large to fit in the inode, a new extent is created, which can cause quite a performance problem. Upping the inode size to 1024 bytes provides enough room to write the default metadata, plus a little headroom. 
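Whether an existing filesystem was created with a large enough inode size can be confirmed with xfs_info, which reports it in the isize field; the mount point here is illustrative:
```
$ xfs_info /srv/node/d1 | grep isize
```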
The following example mount options are recommended when using XFS: ``` mount -t xfs -o noatime -L D1 /srv/node/d1 ``` We do not recommend running Swift on RAID, but if you are using RAID it is also important to make sure that the proper sunit and swidth settings get set so that XFS can make most efficient use of the RAID array. For a standard Swift install, all data drives are mounted directly under /srv/node (as can be seen in the above example of mounting label D1 as /srv/node/d1). If you choose to mount the drives in another directory, be sure to set the devices config option in all of the server configs to point to the correct directory. The mount points for each drive in /srv/node/ should be owned by the root user almost exclusively (root:root 755). This is required to prevent rsync from syncing files into the root drive in the event a drive is unmounted. Swift uses system calls to reserve space for new objects being written into the system. If your filesystem does not support fallocate() or posix_fallocate(), be sure to set the disable_fallocate = true config parameter in account, container, and object server configs. Most current Linux distributions ship with a default installation of updatedb. This tool runs periodically and updates the file name database that is used by the GNU locate tool. However, including Swift object and container database files is most likely not required and the periodic update affects the performance quite a bit. To disable the inclusion of these files add the path where Swift stores its data to the setting PRUNEPATHS in /etc/updatedb.conf: ``` PRUNEPATHS=\"... /tmp ... /var/spool ... /srv/node\" ``` The following changes have been found to be useful when running Swift on Ubuntu Server 10.04. The following settings should be in /etc/sysctl.conf: ``` net.ipv4.tcp_tw_recycle=1 net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_syncookies = 0 net.netfilter.nf_conntrack_max = 262144 ``` To load the updated sysctl settings, run sudo sysctl -p. A note about changing the TIME_WAIT values: by default the OS will hold a port open for 60 seconds to ensure that any remaining packets can be received. During high usage, and with the number of connections that are created, it is easy to run out of ports. We can change this since we are in control of the network. If you are not in control of the network, or do not expect high loads, then you may not want to adjust those values. Swift is set up to log directly to syslog. Every service can be configured with the log_facility option to set the syslog log facility destination. We recommend using syslog-ng to route the logs to specific log files locally on the server and also to remote log collecting servers. Additionally, custom log handlers can be used via the custom_log_handlers setting." } ]
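syslog-ng is recommended above; as a minimal sketch of the same routing idea using rsyslog instead, and assuming the proxy-server is configured with log_facility = LOG_LOCAL0, a rule such as the following sends its output to a dedicated file (path and facility are illustrative):
```
# /etc/rsyslog.d/10-swift.conf (illustrative)
local0.*    /var/log/swift/proxy.log
```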
{ "category": "Runtime", "file_name": "first_contribution_swift.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. 'I went ahead and ran probe/test_object_metadata_replication.py on my machine with both sync_method = rsync and sync_method = ssync - that works for me - but I didn't try it with object_post_as_copy = false' Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that it's probably just our own personal preference. Rather than a comment that says 'You should use a list comprehension here' - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: 'I find this more readable: <diff with working tested code>' If the author (or another reviewer) agrees - it's possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description: What is broken - without this change What is impossible to do with Swift - without this change What is slower/worse/harder - without this change If you're not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a high consistent quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily it's easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code changes. Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some 
and offer them to the patch author as a diff indicating to them that something like these tests Im providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the tests stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. Its quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, most of the time remember its harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If youre not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have Docstrings. Tests should obviate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation or is contrary to existing documentation is probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isnt done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs. manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful" }, { "data": "You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When youre done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g. 
When I broke the change like this: diff it blew up like this: unit test failure Its not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but thats not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers its imperative that you try your best to answer your own questions as part of your review. Do not say Does this blow up if it gets called when xyz - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say I thought this would blow up, so I wrote this test, but it seems fine. But if your initial reaction is I dont understand this or How does this even work? you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again diffs work best. Scoring is subjective. Try to realize youre making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that to the best of your abilities you have not been able to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior its almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if its broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects. 
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because there's less concrete objective value. I've seen lots of these kinds of changes get lost to the backlog. I've also seen some success where multiple authors have collaborated to push over a change rather than provide a review, ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high, most reviews will end with a negative score. However, for non-material grievances (nits) you should feel confident in a positive review if the change is otherwise complete, correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider whether it needs to be fixed before the change is suitable to merge vs. fixing it in a follow-up change. Consider: if the change makes Swift so undeniably better and it was deployed in production without making any additional changes, would it still be correct and complete? Would releasing the change to production without any additional follow-up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive review scores that look different from your reviews - a +2 instead of a +1. But it's exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason it's different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to a +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you don't know how to proceed, respond to the reviewer's comments on the change and ask for help. We'd love to try and help." } ]
{ "category": "Runtime", "file_name": "genindex.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply. This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Swift and other components of OpenStack can be found on the OpenStack wiki and at http://docs.openstack.org. Note If youre looking for associated projects that enhance or use Swift, please see the Associated Projects page. See Complete Reference for the Object Storage REST API The following provides supporting information for the REST API: The OpenStack End User Guide has additional information on using Swift. See the Manage objects and containers section. Index Module Index Search Page Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "index.html#contributor-guides.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is showing to create an EC2 credential: ``` +++ | Field | Value | +++ | access | c2e30f2cd5204b69a39b3f1130ca8f61 | | links | {u'self': u'http://controller:5000/v3/......'} | | project_id | 407731a6c2d0425c86d1e7f12a900488 | | secret | baab242d192a4cd6b68696863e07ed59 | | trust_id | None | | user_id | 00f0ee06afe74f81b410f3fe03d34fbc | +++ ``` An example client using keystone auth with the python boto library will be: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='c2e30f2cd5204b69a39b3f1130ca8f61', awssecretaccess_key='baab242d192a4cd6b68696863e07ed59', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` Set s3api before your auth in your pipeline in proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline" }, { "data": "Using tempauth, the minimum example config is: ``` [pipeline:main] pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server ``` When using keystone, the config will be: ``` [pipeline:main] pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server ``` Finally, add the s3api middleware section: ``` [filter:s3api] use = egg:swift#s3api ``` Note keystonemiddleware.authtoken can be located before/after s3api but we recommend to put it before s3api because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware , you should set delayauthdecision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 are still remaining. Please make sure descriptions in the example proxy-server.conf and what happens with the config, before enabling the options. The compatibility will continue to be improved upstream, you can keep and eye on compatibility via a check tool build by SwiftStack. See https://github.com/swiftstack/s3compat in detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates s3 token with Keystone. Transforms the account name to AUTH%(tenantname). Optionally can retrieve and cache secret from keystone to validate signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ``` [filter:s3token] use = egg:swift3#s3token auth_uri = https://keystonehost:35357 auth_version = 3 ``` you should now use ``` [filter:s3token] use = egg:swift#s3token auth_uri = https://keystonehost:35357/v3 ``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as its read. Bases: S3Request S3Acl request object. authenticate method will run pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth. 
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
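To make the partNumber rules described above concrete, here is a hedged sketch of that validation logic - the exception names and limits mirror the description, but this is an illustration, not the middleware's actual code:

```python
# Illustrative sketch of the partNumber validation rules described above;
# the exception classes are hypothetical stand-ins for the middleware's own.
class InvalidPartArgument(Exception):
    pass


class InvalidPartNumber(Exception):
    pass


def validate_part_number(part_number, max_upload_part_num, parts_count=None):
    """Return a valid integer part number or raise."""
    # the maximum allowed parts is the larger of the configured limit
    # and, if given, the number of parts in the existing object
    max_parts = max_upload_part_num
    if parts_count is not None:
        max_parts = max(max_parts, parts_count)
    try:
        num = int(part_number)
    except ValueError:
        raise InvalidPartArgument(part_number)
    if num < 1 or num > max_parts:
        # outside the globally allowed range
        raise InvalidPartArgument(part_number)
    if parts_count is not None and num > parts_count:
        # a valid number, but the existing object has fewer parts
        raise InvalidPartNumber(part_number)
    return num


# validate_part_number('3', max_upload_part_num=1000, parts_count=5) -> 3
# validate_part_number('7', max_upload_part_num=1000, parts_count=5)
#   -> raises InvalidPartNumber
```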
(A long run of S3 error classes is rendered here in the source documentation, each showing only its base class - Bases: ErrorResponse, with one Bases: InvalidArgument among them - because the individual class names were lost in extraction.) Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift-specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. (Another run of error and exception classes follows: several more Bases: ErrorResponse entries, one Bases: BucketNotEmpty, three Bases: S3Exception and one Bases: Exception.) Bases: ElementBase Wrapper Element class around lxml.etree.Element to support a UTF-8 encoded non-ASCII string as text. Why do we need this? The original lxml.etree.Element supports only unicode for the text, which hurts maintainability because we would have to call a lot of encode/decode methods to apply the account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]" }, { "data": "If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp This format should look like YYYYMMDDThhmmssZ. mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats for the argument, for S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC 2822 (e.g. a date header) or (b) %Y-%m-%dT%H:%M:%S (e.g. a copy result) time_format a string of format to parse in (b). Returns a float instance in epoch time. Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. 
Current s3api(real S3)s ACLs Model is as follows: ``` AccessControlPolicy: Owner: AccessControlList: Grant[n]: (Grantee, Permission) ``` Each bucket or object has its own acl consisting of Owner and AcessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL CONTROLL to owner. Each Grant includes single pair with Grantee, Permission. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you wanna get more information about S3s ACLs model in detail, please see official documentation here, http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS accounts canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves completely same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACL. Bases: object Grant Class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantee. Methods: init: create a Grantee instance elem: create an ElementTree from itself Static Methods: to an Grantee instance. from_elem: convert a ElementTree to an Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to an Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enables this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift" }, { "data": "Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic Acl handling. (i.e. it will check acl from ACL_MAP by using HEAD) Make a handler with the name of the controller. (e.g. 
BucketAclHandler is for BucketController) It consists of method(s) for actual S3 method on controllers as follows. Example: ``` class BucketAclHandler(BaseAclHandler): def PUT: << put acl handling algorithms here for PUT bucket >> ``` Note If the method DONT need to recall getresponse in outside of acl checking, the method have to return the response it needs at the end of method. Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for BASE container so that MultiUploadAclHandler extends BaseAclHandler to check acl only when the verb defined. We should define the verb as the first step to request to backend Swift at incoming request. BASE container name is always w/o MULTIUPLOAD_SUFFIX Any check timing is ok but we should check it as soon as possible. | Controller | Verb | CheckResource | Permission | |:-|:-|:-|:-| | Part | PUT | Container | WRITE | | Uploads | GET | Container | READ | | Uploads | POST | Container | WRITE | | Upload | GET | Container | READ | | Upload | DELETE | Container | WRITE | | Upload | POST | Container | WRITE | Controller Verb CheckResource Permission Part PUT Container WRITE Uploads GET Container READ Uploads POST Container WRITE Upload GET Container READ Upload DELETE Container WRITE Upload POST Container WRITE Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header currently used for only normal-acl (not implemented) on s3acl. TODO: add translation to swift acl like as x-container-read to s3acl Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isnt a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container existence. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket" }, { "data": "Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. 
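As a reference point for these ACL handlers, here is a hedged sketch that builds the default owner-only ACL document described earlier in this section (a single FULL_CONTROL grant to the owner). It uses the stdlib ElementTree rather than the middleware's lxml wrappers, and the IDs are illustrative:

```python
# Illustrative: build the default S3 ACL document - one FULL_CONTROL
# grant for the owner - as described in the ACL overview above.
import xml.etree.ElementTree as ET

XSI = 'http://www.w3.org/2001/XMLSchema-instance'
ET.register_namespace('xsi', XSI)


def default_acl_xml(owner_id, display_name):
    root = ET.Element('AccessControlPolicy')
    owner = ET.SubElement(root, 'Owner')
    ET.SubElement(owner, 'ID').text = owner_id
    ET.SubElement(owner, 'DisplayName').text = display_name
    acl = ET.SubElement(root, 'AccessControlList')
    grant = ET.SubElement(acl, 'Grant')
    grantee = ET.SubElement(grant, 'Grantee',
                            {'{%s}type' % XSI: 'CanonicalUser'})
    ET.SubElement(grantee, 'ID').text = owner_id
    ET.SubElement(grantee, 'DisplayName').text = display_name
    ET.SubElement(grant, 'Permission').text = 'FULL_CONTROL'
    return ET.tostring(root, encoding='unicode')


print(default_acl_xml('test:tester', 'test:tester'))
```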
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
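As an aside, the per-(device, method) plus per-device scheme described above can be pictured with a simple token-bucket sketch. This is an illustration of the concept only - the middleware's actual implementation differs, and the rates here are made up:

```python
# Conceptual sketch: independent token buckets keyed by (device, method),
# plus an overall per-device bucket; both must allow the request.
import time
from collections import defaultdict


class TokenBucket:
    def __init__(self, rate, burst=1.0):
        self.rate, self.burst = float(rate), float(burst)
        self.tokens, self.last = self.burst, time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


buckets = defaultdict(lambda: TokenBucket(rate=10))  # 10 req/s per key


def is_allowed(device, method):
    # both the method-specific and the overall device limit must pass
    return (buckets[(device, method)].allow()
            and buckets[(device, None)].allow())
```

A request that trips either bucket corresponds to the rejection case described next.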
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
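For illustration, a minimal stdlib client for the bulk delete described above might look like the following - the endpoint, token and object names are hypothetical, and you would need a running cluster to exercise it:

```python
# Hedged client-side sketch of a bulk delete request: POST a newline
# separated, URL-encoded list of targets to the storage URL.
import urllib.parse
import urllib.request

storage_url = 'http://127.0.0.1:8080/v1/AUTH_test'  # hypothetical endpoint
token = 'AUTH_tk_example'                           # hypothetical token
targets = ['/cont/obj1', '/cont/obj2', '/empty_container']

body = '\n'.join(urllib.parse.quote(t) for t in targets).encode('ascii')
req = urllib.request.Request(
    storage_url + '?bulk-delete',
    data=body, method='POST',
    headers={'X-Auth-Token': token,
             'Content-Type': 'text/plain',
             'Accept': 'application/json'})
with urllib.request.urlopen(req) as resp:
    # the response is always 200; the JSON body carries the real outcome
    print(resp.read().decode('utf-8'))
```

Note that the 200 status only means the request was accepted; as explained above, the body must be parsed for the per-item results.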
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequests proxy log will have a swift.source set to BD the logs content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is" }, { "data": "inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domains CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environments Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and its unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | Metadata Use X-Container-Meta-Quota-Bytes Maximum size of the container, in bytes. X-Container-Meta-Quota-Count Maximum object count of the container. The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. 
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
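To picture what such a file-like wsgi.input replacement does, here is a minimal hedged sketch in the spirit of the HashingInput wrapper mentioned in the s3api section - the class name, hash choice and failure behavior are illustrative, not the middleware's actual code:

```python
# Minimal sketch of a wsgi.input wrapper: reads pass through unchanged
# while being hashed, so the digest can be checked once the body is
# exhausted. Illustrative only.
import hashlib


class HashingInputProxy:
    def __init__(self, wsgi_input, expected_hexdigest):
        self._input = wsgi_input
        self._hash = hashlib.sha256()
        self._expected = expected_hexdigest

    def read(self, size=-1):
        chunk = self._input.read(size)
        self._hash.update(chunk)
        if not chunk and self._hash.hexdigest() != self._expected:
            # EOF reached and the body did not match its declared hash
            raise ValueError('request body did not match declared SHA256')
        return chunk

    def readline(self, size=-1):
        chunk = self._input.readline(size)
        self._hash.update(chunk)
        return chunk
```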
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache:

```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```

Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is:

```
<form action="<swift-url>" method="POST"
      enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```

Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```

If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:

```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```

The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of container with a url like:

```
https://swift-cluster.example.com/v1/AUTH_account/container/
```

Optionally, you can include an object prefix to better separate different users' uploads, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:

```
import hmac
from hashlib import sha512
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count, expires)
# encode to bytes so this also runs on Python 3
signature = hmac.new(key.encode(), hmac_body.encode(), sha512).hexdigest()
```

The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy.
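For completeness, a minimal deployment sketch is shown below. The formpost filter itself needs no special options; the pipeline ordering (formpost ahead of the auth middleware) follows the placement conventions described elsewhere in this document, but the exact neighbors are illustrative assumptions:

```
[pipeline:main]
pipeline = catch_errors cache formpost tempauth proxy-server

[filter:formpost]
use = egg:swift#formpost
```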
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being encrypted. secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:

```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```

Setup container and object keys based on the request path. Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys. Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone.
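For example, the ordering requirement above (authtoken before keystoneauth) corresponds to a pipeline along these lines; the surrounding middleware names are illustrative:

```
[pipeline:main]
pipeline = catch_errors cache authtoken keystoneauth proxy-logging proxy-server
```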
proxy-server.conf-sample The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to set account auto creation to true:

```
[app:proxy-server]
account_autocreate = true
```

And add a swift authorization filter section, such as:

```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```

The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:

```
reseller_prefix = NEWAUTH
```

Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:

```
reseller_prefix = AUTH, SERVICE
```

This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:

```
operator_roles
service_roles
```

For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:

```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, some_other_role
```

X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:

```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```

The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
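As a concrete illustration of the UUID-based grantee syntax, a container ACL could be set with a request such as the following; the host, token, and IDs are hypothetical placeholders:

```
curl -i -X POST http://<host>:<port>/v1/AUTH_<project_id>/container \
     -H 'X-Auth-Token: <token>' \
     -H 'X-Container-Read: <grantee_project_uuid>:<grantee_user_uuid>'
```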
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:

```
default_domain_id = default
```

The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:

```
allow_names_in_acls = false
```

To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:

```
allow_overrides = false
```

app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form:

```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```

with a JSON-encoded list of endpoints of the form:

```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```

correspondingly, e.g.:

```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```

Using the v2 API, answers requests of the form:

```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```

with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:

```
{ "endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1",
                "http://10.1.1.1:6230/sda3/2/a/c3/o1",
                "http://10.1.1.1:6240/sda4/2/a/c3/o1"],
  "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```

In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
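For instance, a service co-located with the cluster might look up an object's locations like this (host and names are hypothetical; the response is a JSON document shaped like the example above):

```
curl http://<proxy-host>:8080/endpoints/v2/a/c3/o1
```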
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if path is invalid. @author: eamonn-otoole Object versioning in Swift has 3 different modes. There are two legacy modes that have similar API with a slight difference in behavior and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature containers in a Swift cluster will be in one of 3 possible states: 1. Object versioning never enabled, 2. Object Versioning Enabled or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous versions' content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with query parameter:

```
?versions
```

To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter:

```
?version-id=<id>
```

where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted and no changes made to the symlink pointing to the current version.
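Putting these pieces together, a client session using this API might look like the following; the host, names, token, and version id are hypothetical:

```
# enable versioning on a container
curl -i -X POST http://<host>:<port>/v1/AUTH_test/container \
     -H 'X-Auth-Token: <token>' -H 'X-Versions-Enabled: true'

# list all versions in the container
curl -i 'http://<host>:<port>/v1/AUTH_test/container?versions' \
     -H 'X-Auth-Token: <token>'

# retrieve a specific previous version of an object
curl -i 'http://<host>:<port>/v1/AUTH_test/container/object?version-id=<id>' \
     -H 'X-Auth-Token: <token>'
```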
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline; you must also set the option allow_object_versioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object req original request. versions_cont container where previous versions of the object are stored. account account name. Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request acts upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:

```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```

These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request.
req swob.Request object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wire_status_int the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that makes an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file:

```
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
```

get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys.
Server side copy can also be used to change content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be zero). There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:

```
curl -i -X PUT http://<storage_url>/container1/destination_obj
     -H 'X-Auth-Token: <token>'
     -H 'X-Copy-From: /container2/source_obj'
     -H 'Content-Length: 0'
```

Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:

```
curl -i -X COPY http://<storage_url>/container2/source_obj
     -H 'X-Auth-Token: <token>'
     -H 'Destination: /container1/destination_obj'
     -H 'Content-Length: 0'
```

Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj
     -H 'X-Auth-Token: <token>'
     -H 'X-Copy-From: /container/source_obj'
     -H 'X-Copy-From-Account: AUTH_test2'
     -H 'Content-Length: 0'
```

Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example:

```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj
     -H 'X-Auth-Token: <token>'
     -H 'Destination: /container/destination_obj'
     -H 'Destination-Account: AUTH_test1'
     -H 'Content-Length: 0'
```

The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:

```
?multipart-manifest=get
```

If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs (Static Large Objects) for further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example: ``` [DEFAULT] ... [pipeline:main] pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server ... [filter:staticweb] use = egg:swift#staticweb ``` Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values: ``` X-Container-Meta-Web-Index <index.name> X-Container-Meta-Web-Error <error.name.suffix> ``` If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the" }, { "data": "Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note tempurlprefix values should typically end with a slash (/) when used with StaticWeb. StaticWebs redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable: ``` swift post -r '.r:*' container ``` You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive: ``` swift post -m 'web-index:index.html' container ``` You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings: ``` swift post -r '.r:*,.rlistings' container swift post -m 'web-listings: true' container ``` Now you should see object listings for paths and pseudo paths that have no index.html. 
Enable a custom listings style sheet: ``` swift post -m 'web-listings-css:listings.css' container ``` Set an error file: ``` swift post -m 'web-error:error.html' container ``` Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object: ``` swift post -m 'web-directory-type:text/directory' container ``` Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series" }, { "data": "Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object theyre pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlinks X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e. 
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlinks Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlinks Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlinks Content-Type using the ?symlink=get interface. You can change a symlinks Content-Type using a POST request. The symlinks Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink" }, { "data": "The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlinks target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function. 
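For illustration, a symlink entry in such a JSON listing might look roughly like the following; the exact combination of keys depends on how the symlink was created (static vs. dynamic), and the values here are hypothetical:

```
{
  "name": "my-symlink",
  "bytes": 0,
  "content_type": "application/symlink",
  "symlink_path": "/v1/AUTH_test/target-container/target-object"
}
```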
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.
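For reference, creating a dynamic symlink as described earlier is just a zero-length PUT with the target header; the host and names below are hypothetical:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test/container/my-symlink \
     -H 'X-Auth-Token: <token>' \
     -H 'X-Symlink-Target: target-container/target-object' \
     -H 'Content-Length: 0'
```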
req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request startresponse startresponse function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what is a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink- headers to client-facing X-Symlink- headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as: ``` [pipeline:main] pipeline = catch_errors cache tempauth proxy-server ``` Set account auto creation to true in proxy-server.conf: ``` [app:proxy-server] account_autocreate = true ``` And add a tempauth filter section, such as: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing .admin usertest2tester2 = testing2 .admin usertesttester3 = testing3 user64dW5kZXJfc2NvcmUYV9i = testing4 ``` See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is: ``` user<account><user> = <key> [group] [group] [...] [storage_url] ``` If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this: ``` user64<accountb64><userb64> = <key> [group] [...] [storage_url] ``` There are three special groups: .reseller_admin can do anything to any account for this auth .reseller_reader can GET/HEAD anything in any account for this auth" }, { "data": "can do anything within the account If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to: ``` $HOST/v1/<resellerprefix><account> ``` Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user<account><user> name. 
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH. When a request's token and/or path start with AUTH, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl

acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = {"admin": ["alice"], "read-write": ["bob", "carol"]}
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```

Bases: object

app The next WSGI app in the pipeline
conf The dict of configuration values from the Paste config file

Return a dict of ACL data from the account server via get_account_info.

Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth.

X-Account-Access-Control
swift.common.middleware.acl.parse_acl()
swift.common.middleware.acl.format_acl()

Returns None if the request is authorized to continue or a standard WSGI response callable if not.

Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Return a user-readable string indicating the errors in the input ACL, or None if there are no errors.

Get groups for the given token.

env The current WSGI environment dictionary.
token Token to validate and return a group string for.
None if the token is invalid, or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user.

WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in a swob.Request object and passes it down.

env WSGI environment dictionary
start_response WSGI callable

Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers of the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL of the default Swift cluster to use.

req The swob.Request to process.
swob.Response, 2xx on success with data set as explained above.

Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response).

req swob.Request object

Returns a WSGI filter app for use with paste.deploy.

TempURL Middleware

Allows the creation of URLs to provide temporary access to objects.

For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request.

If the user were to share the link with all his friends, or accidentally post it on a forum, etc., the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs which contain signatures valid for all objects sharing a common prefix. These prefix-based URLs are useful for sharing a set of objects.

Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from.

To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account.

The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported.

For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time

method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward.

Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating an HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time

method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO
Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time

method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the ip to a range of 1.2.3.X, i.e. from the ip 1.2.3.0 to 1.2.3.255:

```
import hmac
from hashlib import sha256
from time import time

method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed.

Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift.

TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present.
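Putting the signature steps above together, here is a minimal helper sketch (the host, path and key are placeholder values; this helper is an illustration, not part of the middleware itself):

```python
import hmac
from hashlib import sha256
from time import time
from urllib.parse import quote

def make_temp_url(host, path, key, method='GET', seconds=60):
    # hmac_body layout follows the middleware docs: method, expiry, full path
    expires = int(time() + seconds)
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
    return 'https://%s%s?temp_url_sig=%s&temp_url_expires=%d' % (
        host, quote(path), sig, expires)

print(make_temp_url('swift-cluster.example.com',
                    '/v1/AUTH_account/container/object', b'mykey'))
```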
With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved. The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object, but you may still want the content to be saved locally with a specific filename. You can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameters to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings:

incoming_remove_headers
A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired

incoming_allow_headers
A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None

outgoing_remove_headers
A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-*

outgoing_allow_headers
A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-*

methods
A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE

allowed_digests
A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512

Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals.

Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals.

Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information.

The proxy logs created for any subrequests made will have swift.source set to TU.

app The next WSGI filter or app in the paste.deploy chain.
conf The configuration dict for the middleware.
HTTP user agent to use for subrequests.
The next WSGI application/filter in the paste.deploy pipeline.
The filter configuration dict.
Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY.
Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*.
Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE.
Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*.
Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay.
Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*.
Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key.
Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*.

Returns the WSGI filter for use with paste.deploy.

Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers.

Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled:

X-History-Location
On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container.

X-Versions-Location
On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found.

Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response.

Note It is recommended to use a different archive container for each container that is being versioned.

Note Enabling versioning on an archive container is not recommended.

When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created.
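As a quick worked check of that naming rule (the object name and timestamp here are arbitrary sample values, not anything produced by the middleware):

```python
name = 'myobject'
length = '%03x' % len(name)      # 8 characters -> '008'
timestamp = '1512508563.00000'   # when the previous version was created
print('%s%s/%s' % (length, name, timestamp))
# -> 008myobject/1512508563.00000
```

This is why the listings in the examples below query the archive container with prefix=008myobject/.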
A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups. A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes.

A DELETE to a versioned object will be handled in one of two ways, as described above.

To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it.

This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline, and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature.

If the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request.

Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location.

If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that they can now be tracked by the versioned_writes middleware.

Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss.

First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

See a listing of the older versions of the object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now delete the current version of the object. Subsequent requests will 404:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To restore a previous version, simply COPY it from the archive container:

```
curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject"
```

Note that the archive container still has all previous versions of the object, including the source for the restore:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To permanently delete a previous version, DELETE it from the archive container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp>
```

If you want to disable all functionality, set allow_versioned_writes to False in the middleware options.

Disable versioning from a container (x is any value except empty):

```
curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container
```

Bases: WSGIContext

Handle DELETE requests when in stack mode. Delete the current version of the object and pop the previous version in its place.

req original request.
versions_cont container where previous versions of the object are stored.
api_version api version.
account_name account name.
container_name container name.
object_name object name.
Handle DELETE requests when in history mode. Copy the current version of the object to versions_container and write a delete marker before proceeding with the original request.

req original request.
versions_cont container where previous versions of the object are stored.
api_version api version.
account_name account name.
object_name name of object of original request

Copy the current version of the object to versions_container before proceeding with the original request.

req original request.
versions_cont container where previous versions of the object are stored.
api_version api version.
account_name account name.
object_name name of object of original request

Profiling middleware for Swift Servers.

The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates cpu timing statistics for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below.

Index page for browsing profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json

NFL is defined by the concatenation of file name, function name and the first
line number, e.g.:
    account.py:50(GETorHEAD)
or with full path:
    /opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:

http://localhost:8080/__profile__                    (proxy server)
http://localhost:6200/__profile__/all                (object server)
http://localhost:6201/__profile__/current            (container server)
http://localhost:6202/__profile__/12345?format=json  (account server)
```

The profiling middleware can be configured in the paste config file for WSGI servers such as the proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory.

The profiling data is provided in four formats: binary (by default), json, csv, and an ods spreadsheet, the last of which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability, which is enabled by using the matplotlib toolkit; it likewise needs to be installed if you want to use this feature.
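For reference, a minimal filter section might look like the following sketch. The option names are taken from the sample configuration files mentioned above, so treat them as assumptions and check the proxy-server.conf-sample shipped with your release:

```
[filter:xprofile]
use = egg:swift#xprofile
# profiler to use; the default is eventlet-aware
profile_module = eventlet.green.profile
# prefix for the dumped profile data files
log_filename_prefix = /tmp/log/swift/profile/default.profile
# seconds between dumps of accumulated statistics
dump_interval = 5.0
# URL prefix served by the mini web UI
path = /__profile__
```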
{ "category": "Runtime", "file_name": "docs.openstack.org.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
If you are using keystone auth instead, you need EC2 credentials; these can be downloaded from the API Endpoints tab of the dashboard or created with the openstack ec2 credentials create command. For example, creating an EC2 credential:

```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
```

An example client using keystone auth with the python boto library would be:

```
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=OrdinaryCallingFormat())
```

Set s3api before your auth middleware in the pipeline in your proxy-server.conf file. To enable all of the compatibility currently supported, you should also make sure that bulk, slo, and your auth middleware are included in your proxy pipeline configuration.

Using tempauth, the minimum example config is:

```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
```

When using keystone, the config will be:

```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
```

Finally, add the s3api middleware section:

```
[filter:s3api]
use = egg:swift#s3api
```

Note keystonemiddleware.authtoken can be located before or after s3api, but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will issue a token validation request to keystone (i.e., authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True.

Currently, the s3api is being ported from https://github.com/openstack/swift3, so any existing issues in swift3 may still remain. Please read the option descriptions in the example proxy-server.conf and understand what each option does before enabling it. Compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for detail.

Bases: object S3Api: S3 compatibility middleware

Check that required filters are present in order in the pipeline.

Check that proxy-server.conf has an appropriate pipeline for s3api.

Standard filter factory to use the middleware with paste.deploy

s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name)s. Optionally can retrieve and cache the secret from keystone to validate the signature locally.

Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like

```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
```

you should now use

```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
```

Bases: object Middleware that handles S3 authentication.

Returns a WSGI filter app for use with paste.deploy.

Bases: object wsgi.input wrapper to verify the hash of the input as it's read.

Bases: S3Request S3Acl request object.

The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swifts HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why we need this?: Original lxml.etree.Element supports only unicode for the text. It declines maintainability because we have to call a lot of encode/decode methods to apply account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such a redundant codes from swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a" }, { "data": "method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time really like as time.mktime the difference from time.mktime is allowing to 2 formats string for the argument for the S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. 
The current s3api (real S3) ACL model is as follows:

```
AccessControlPolicy:
    Owner:
    AccessControlList:
        Grant[n]:
            (Grantee, Permission)
```

Each bucket or object has its own acl consisting of an Owner and an AccessControlList. An AccessControlList can contain several Grants. By default, the AccessControlList has only one Grant allowing FULL_CONTROL to the owner. Each Grant includes a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission.

This module defines the groups and the relation tree.

If you want more detailed information about the S3 ACL model, please see the official documentation here: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html

Bases: object S3 ACL class.

(See http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission.

Check that the user is an owner.

Check that the user has a permission.

Decode the value to an ACL instance.

Convert an ElementTree to an ACL instance

Convert HTTP headers to an ACL instance.

Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request.

Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers.

Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated).

Bases: object A dict-like object that returns canned ACLs.

Bases: object Grant Class which includes both Grantee and Permission

Create an etree element.

Convert an ElementTree to an ACL instance

Bases: object Base class for grantee.

Methods: init: create a Grantee instance; elem: create an ElementTree from itself.

Static Methods: to a Grantee instance. from_elem: convert an ElementTree to a Grantee instance.

Get an etree element of this instance.

Convert a grantee string in the HTTP header to a Grantee instance.

Bases: Grantee Base class for Amazon S3 Predefined Groups

Get an etree element of this instance.

Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket.

Bases: object Owner class for S3 accounts

Bases: Grantee Canonical user class for S3 accounts.

Get an etree element of this instance.

A set of predefined grants supported by AWS S3.

Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance.

Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata.

Convert a URI to one of the predefined groups.

To make controller classes clean, we need these handlers. They are really useful for customizing acl checking algorithms for each controller.

BaseAclHandler wraps basic Acl handling. (i.e. it will check the acl from ACL_MAP by using HEAD)

Make a handler with the name of the controller. (e.g.
BucketAclHandler is for BucketController.) It consists of method(s) for the actual S3 methods on the controllers, as follows.

Example:

```
class BucketAclHandler(BaseAclHandler):
    def PUT(self):
        # put acl handling algorithms here for PUT bucket
        ...
```

Note If the method doesn't need to call get_response again outside of the acl check, the method has to return the response it needs at the end of the method.

Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP

Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body.

Bases: BaseAclHandler BucketAclHandler: Handler for BucketController

Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController

Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step of the request to backend Swift on the incoming request. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is ok, but we should check it as soon as possible.

| Controller | Verb | CheckResource | Permission |
|:-----------|:-----|:--------------|:-----------|
| Part | PUT | Container | WRITE |
| Uploads | GET | Container | READ |
| Uploads | POST | Container | WRITE |
| Upload | GET | Container | READ |
| Upload | DELETE | Container | WRITE |
| Upload | POST | Container | WRITE |

Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController

Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController

Bases: BaseAclHandler S3AclHandler: Handler for S3AclController

Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController

Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController

Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acl, like x-container-read to s3acl

Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet.

Bases: object Base WSGI controller class for the middleware

Returns the target resource type of this controller.

Bases: Controller Handles unsupported requests.

A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead.

A decorator to ensure the container existence.

A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response.

Bases: Controller Handles account level requests.

Handle GET Service request

Bases: Controller Handles bucket requests.

Handle DELETE Bucket request

Handle GET Bucket (List Objects) request

Handle HEAD Bucket (Get Metadata) request

Handle POST Bucket request

Handle PUT Bucket request

Bases: Controller Handles requests on objects

Handle DELETE Object request

Handle GET Object request

Handle HEAD Object request

Handle PUT Object and PUT Object (Copy) request

Bases: Controller Handles the following APIs: GET Bucket acl, PUT Bucket acl, GET Object acl, PUT Object acl. Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl.

Handles PUT Bucket acl and PUT Object acl.

Attempts to construct an S3 ACL based on what is found in the swift headers.

Bases: Controller Handles the following APIs: GET Bucket acl, PUT Bucket acl, GET Object acl, PUT Object acl. Those APIs are logged as ACL operations in the S3 server log.

Handles GET Bucket acl and GET Object acl.

Handles PUT Bucket acl and PUT Object acl.

Implementation of S3 Multipart Upload.

This module implements the S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift containers and objects to store S3 upload information:

[bucket]+segments
A container to store upload information. [bucket] is the original bucket where the multipart upload is initiated.

[bucket]+segments/[upload_id]
An object for the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but neither completed nor aborted.

[bucket]+segments/[upload_id]/[part_number]
The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace [bucket]+segments/[upload_id]/[part_number].

Example listing result in the [bucket]+segments container:

```
[bucket]+segments/[upload_id1]    # upload id object for upload_id1
[bucket]+segments/[upload_id1]/1  # part object for upload_id1
[bucket]+segments/[upload_id1]/2  # part object for upload_id1
[bucket]+segments/[upload_id1]/3  # part object for upload_id1
[bucket]+segments/[upload_id2]    # upload id object for upload_id2
[bucket]+segments/[upload_id2]/1  # part object for upload_id2
[bucket]+segments/[upload_id2]/2  # part object for upload_id2
.
.
```

Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed.

Bases: Controller Handles the following APIs: Upload Part, Upload Part - Copy. Those APIs are logged as PART operations in the S3 server log.

Handles Upload Part and Upload Part Copy.

Bases: Controller Handles the following APIs: List Parts, Abort Multipart Upload, Complete Multipart Upload. Those APIs are logged as UPLOAD operations in the S3 server log.

Handles Abort Multipart Upload.

Handles List Parts.

Handles Complete Multipart Upload.

Bases: Controller Handles the following APIs: List Multipart Uploads, Initiate Multipart Upload. Those APIs are logged as UPLOADS operations in the S3 server log.

Handles List Multipart Uploads

Handles Initiate Multipart Upload.

Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE operation in the S3 server log.

Handles Delete Multiple Objects.

Bases: Controller Handles the following APIs: GET Bucket versioning, PUT Bucket versioning. Those APIs are logged as VERSIONING operations in the S3 server log.

Handles GET Bucket versioning.

Handles PUT Bucket versioning.

Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log.

Handles GET Bucket location.

Bases: Controller Handles the following APIs: GET Bucket logging, PUT Bucket logging. Those APIs are logged as LOGGING_STATUS operations in the S3 server log.

Handles GET Bucket logging.

Handles PUT Bucket logging.

Bases: object Backend rate-limiting middleware.

Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit.
If a request would cause the rate-limit to be exceeded for the method and/or device, then a response with a 529 status code is returned.

Middleware that will perform many operations on a single request.

Expand tar files into a Swift account. The request must be a PUT with the query parameter ?extract-archive=format specifying the format of the archive file. Accepted formats are tar, tar.gz, and tar.bz2.

For a PUT to the following url:

```
/v1/AUTH_Account/$UPLOAD_PATH?extract-archive=tar.gz
```

UPLOAD_PATH is where the files will be expanded to. UPLOAD_PATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows:

```
/v1/AUTH_Account/$UPLOAD_PATH/$FILE_PATH
```

Where FILE_PATH is the file name from the listing in the tar file.

If the UPLOAD_PATH is an empty string, containers will be auto created accordingly, and files in the tar that would not map to any container (files in the base directory) will be ignored.

Only regular files will be uploaded. Empty directories, symlinks, etc. will not be uploaded.

If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations, using the same headers from the original request (e.g. auth-tokens, Content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user.

In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header:

```
X-Detect-Content-Type: true
```

For example:

```
curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H "Content-Type: application/x-tar" -H "X-Auth-Token: xxx" -H "X-Detect-Content-Type: true"
```

The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type.

For example:

```
setfattr -n user.mime_type -v "application/python-setup" setup.py
setfattr -n user.meta.lunch -v "burger and fries" setup.py
setfattr -n user.meta.dinner -v "baked ziti" setup.py
setfattr -n user.stuff -v "whee" setup.py
```

Will get translated to headers:

```
Content-Type: application/python-setup
X-Object-Meta-Lunch: burger and fries
X-Object-Meta-Dinner: baked ziti
```

The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only the xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects.

Notes:

(1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later.
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.user_attribute as the pax header SCHILY.xattr.user.user_attribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.user_attribute.

The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server, and precautions need to be taken to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this, the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the request.

The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows:

```
{"Response Status": "201 Created",
 "Response Body": "",
 "Errors": [],
 "Number Files Created": 10}
```

If all valid files were uploaded successfully, the Response Status will be 201 Created. If any files failed to be created, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed.

There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequest's proxy log will have a swift.source set to EA; the log's content length will reflect the unzipped size of the file. If double proxy-logging is used, the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz).

Will delete multiple objects or containers from their account with a single request. Responds to POST requests with the query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form:

```
/container_name/obj_name
```

or for a container (which must be empty at time of delete):

```
/container_name
```

The response is similar to extract archive, in that every response will be a 200 OK and you must parse the response body for the actual results. An example response is:

```
{"Number Not Found": 0,
 "Response Status": "200 OK",
 "Response Body": "",
 "Errors": [],
 "Number Deleted": 6}
```

If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the request's Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml.
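For instance, a bulk delete of two objects and an (empty) container might look like the following sketch (the host, token and names are placeholder values):

```
curl -X POST "http://127.0.0.1:8080/v1/AUTH_test?bulk-delete" \
     -H "X-Auth-Token: <token>" \
     -H "Content-Type: text/plain" \
     -H "Accept: application/json" \
     --data-binary $'/container/obj1\n/container/obj2\n/container'
```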
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD; the log's content length will be 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted).

Bases: Exception

Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration.

data_format resulting format

data_dict generated data about results

error_list list of quoted filenames that failed

root_tag the tag name to use for root elements when returning XML; e.g. extract or delete

Bases: Exception

Bases: object

Middleware that provides high-level error handling and ensures that a transaction id will be set for every request.

Bases: WSGIContext

Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised.

inner_iter iterable of bytestrings

nbytes number of bytes expected

CNAME Lookup Middleware

Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS.

This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain.

Bases: object

CNAME Lookup Middleware

See above for a full description.

app The next WSGI filter or app in the paste.deploy chain.

conf The configuration dict for the middleware.

Given a domain, returns its DNS CNAME mapping and DNS ttl.

domain domain to query on

resolver dns.resolver.Resolver() instance used for executing DNS queries

(ttl, result)

The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check.

Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body.

Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and it is unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused).

Quotas are set by adding meta values to the container, and are validated when set:

| Metadata | Use |
|:--|:--|
| X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. |
| X-Container-Meta-Quota-Count | Maximum object count of the container. |

The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware.
For example:

```
[pipeline:main]
pipeline = catch_errors cache tempauth container_quotas proxy-server

[filter:container_quotas]
use = egg:swift#container_quotas
```

Bases: object

WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync.

Bases: object

Cross domain middleware used to respond to requests for cross domain policy information.

If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API.

To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis (...) indicate other middleware you may have chosen to use:

```
[pipeline:main]
pipeline = ... crossdomain ... authtoken ... proxy-server
```

And add a filter section, such as:

```
[filter:crossdomain]
use = egg:swift#crossdomain
cross_domain_policy = <allow-access-from domain="*.example.com" />
    <allow-access-from domain="www.example.com" secure="false" />
```

For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value.

The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified:

```
cross_domain_policy = <allow-access-from domain="*" secure="false" />
```

Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942

Returns a 200 response with cross domain policy information

Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e. expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response:

```
{"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}}
```

This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation.

There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so:

```
swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g
```

Domain Remap Middleware

Middleware that translates container and account parts of a domain to path parameters that the proxy server understands.

Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com.

If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path.
For example, with the default configuration:

```
container.AUTH-account.example.com/object
container.AUTH-account.example.com/v1/object
```

would both be translated to:

```
container.AUTH-account.example.com/v1/AUTH_account/container/object
```

and:

```
AUTH-account.example.com/container/object
AUTH-account.example.com/v1/container/object
```

would both be translated to:

```
AUTH-account.example.com/v1/AUTH_account/container/object
```

Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a default_reseller_prefix has been configured.

The reseller_prefixes list defaults to the single prefix AUTH. The default_reseller_prefix is not configured by default.

Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_).

For example, with the default configuration:

```
auth-account.example.com/container/object
AUTH-account.example.com/container/object
auth_account.example.com/container/object
AUTH_account.example.com/container/object
```

would all be translated to:

```
<unchanged>.example.com/v1/AUTH_account/container/object
```

When no match is found in reseller_prefixes, the default_reseller_prefix config option is used. When no default_reseller_prefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware.

For example, with default_reseller_prefix = AUTH:

```
account.example.com/container/object
```

would be translated to:

```
account.example.com/v1/AUTH_account/container/object
```

Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged).

Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations.

Bases: object

Domain Remap Middleware

See above for a full description.

app The next WSGI filter or app in the paste.deploy chain.

conf The configuration dict for the middleware.

DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further details.

Encryption middleware should be deployed in conjunction with the Keymaster middleware.

Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter.

Provides a factory function for loading encryption middleware.

Bases: object

File-like object to be swapped in for wsgi.input.
Bases: object

Middleware for encrypting data and user metadata.

By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read.

Bases: CryptoWSGIContext

Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value.

req a swob Request

keys a dict of encryption keys

Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted.

Encrypt a header value using the supplied key.

crypto a Crypto instance

value value to encrypt

key crypto key to use

a tuple of (encrypted value, crypto_meta) where crypto_meta is a dict of form returned by get_crypto_meta()

ValueError if value is empty

Bases: CryptoWSGIContext

Base64-decode and decrypt a value using the crypto_meta provided.

value a base64-encoded value to decrypt

key crypto key to use

crypto_meta a crypto-meta dict of form returned by get_crypto_meta()

decoder function to turn the decrypted bytes into useful data

decrypted value

Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified.

A value should either be a string that does not contain the ; character or should be of the form:

```
<base64-encoded ciphertext>;swift_meta=<crypto meta>
```

value value to decrypt

key crypto key to use

required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta.

decoder function to turn the decrypted bytes into useful data

decrypted value if crypto meta is found, otherwise the unmodified value

EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found.

Extract a crypto_meta dict from a header.

header_name name of header that may have crypto_meta

check if True validate the crypto meta

A dict containing crypto_meta items

EncryptionException if an error occurs while parsing the crypto meta

Determine if a response should be decrypted, and if so then fetch keys.

req a Request object

crypto_meta a dict of crypto metadata

a dict of decryption keys

Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key.

crypto_meta a dict of crypto-meta

wrapping_key key to be used to decrypt the wrapped key

an unwrapped key

HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid

Bases: object

Middleware for decrypting data and user metadata.

Bases: BaseDecrypterContext

Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter.

Bases: BaseDecrypterContext

Find encrypted headers and replace with the decrypted versions.

put_keys a dict of decryption keys used for object PUT.

post_keys a dict of decryption keys used for object POST.

A list of headers with any encrypted headers replaced by their decrypted values.
HTTPInternalServerError if any error occurs while decrypting headers

Decrypts a multipart mime doc response body.

resp application response

boundary multipart boundary string

body_key decryption key for the response body

crypto_meta crypto_meta for the response body

generator for decrypted response body

Decrypts a response body.

resp application response

body_key decryption key for the response body

crypto_meta crypto_meta for the response body

offset offset into object content at which response body starts

generator for decrypted response body

This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted.

It must be placed at the beginning of the pipeline, right after cache:

```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```

Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs.

Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s.

FormPost Middleware

Translates a browser form post into a regular Swift object PUT.

The format of the form is:

```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```

Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```

If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:

```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```

The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload.

The <swift-url> is the URL of the Swift destination, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of container with a url like:

```
https://swift-cluster.example.com/v1/AUTH_account/container/
```

Optionally, you can include an object prefix to better separate different users' uploads, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded).

The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes.

The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired.

The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated.

The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:

```
import hmac
from hashlib import sha512
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
    max_file_size, max_file_count, expires)
signature = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
    sha512).hexdigest()
```

The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys.

Be certain to use the full path, from the /v1/ onward.

Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes.

The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature.

Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored).

Bases: object

FormPost Middleware

See above for a full description.

The proxy logs created for any subrequests made will have swift.source set to FP.

app The next WSGI filter or app in the paste.deploy chain.

conf The configuration dict for the middleware.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict.

The maximum size of any attribute's value. Any additional data will be truncated.

The size of data to read from the form at any given time.

Returns the WSGI filter for use with paste.deploy.

The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client.

The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
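An illustrative placement (pipeline names are assumptions drawn from the other sample pipelines in this document; your deployment will differ):

```
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache ... proxy-server
```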
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server.

A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response.

Bases: object

Healthcheck middleware used for monitoring.

If the path is /healthcheck, it will respond 200 with OK as the body.

If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body.

Returns a 503 response with DISABLED BY FILE in the body.

Returns a 200 response with OK in the body.

Keymaster middleware should be deployed in conjunction with the Encryption middleware.

Bases: object

Base middleware for providing encryption keys.

This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment.

Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function.

Creates an encryption key that is unique for the given path.

path the (WSGI string) path of the resource being encrypted.

secret_id the id of the root secret from which the key should be derived.

an encryption key.

UnknownSecretIdError if the secret_id is not recognised.

Bases: BaseKeyMaster

Middleware for providing encryption keys.

The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss.

Bases: WSGIContext

The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:

```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```

Setup container and object keys based on the request path.

Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data.

key_id if given this should be a dict with the items included under the id key of a dict returned by this method.

A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys.

Bases: object

Swift middleware to Keystone authorization system.

In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone.
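For illustration, a minimal keystone-enabled pipeline might look like this (a sketch; consult proxy-server.conf-sample for the authoritative ordering and the full set of recommended middleware):

```
[pipeline:main]
pipeline = catch_errors cache authtoken keystoneauth proxy-server
```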
The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware.

If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware.

In proxy-server.conf you will need to set account_autocreate to true:

```
[app:proxy-server]
account_autocreate = true
```

And add a swift authorization filter section, such as:

```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```

The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles.

The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234.

If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:

```
reseller_prefix = NEWAUTH
```

Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path.

It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:

```
reseller_prefix = AUTH, SERVICE
```

This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:

```
operator_roles
service_roles
```

For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:

```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, some_other_role
```

X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:

```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```

The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token.

Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
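For example, a cross-tenant read grant using UUIDs might be set on a container like this (a sketch; the IDs shown are hypothetical placeholders for the grantee tenant and user UUIDs):

```
swift post -r '6b5a5f830123456789abcdef01234567:9a8b7c6d5e4f3021123456789abcdef0' container
```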
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:

```
default_domain_id = default
```

The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:

```
allow_names_in_acls = false
```

To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account.

By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:

```
allow_overrides = false
```

app The next WSGI app in the pipeline

conf The dict of configuration values

Authorize an anonymous request. None if authorization is granted, an error page otherwise.

Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Returns a WSGI filter app for use with paste.deploy.

List endpoints for an object, account or container.

This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop.

Using the original API, answers requests of the form:

```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```

with a JSON-encoded list of endpoints of the form:

```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```

correspondingly, e.g.:

```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```

Using the v2 API, answers requests of the form:

```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```

with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:

```
{ "endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1",
                "http://10.1.1.1:6230/sda3/2/a/c3/o1",
                "http://10.1.1.1:6240/sda4/2/a/c3/o1"],
  "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```

In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
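For example, a locality-aware client inside the cluster might fetch this data with a plain unauthenticated GET (a sketch assuming a proxy listening on 127.0.0.1:8080 and hypothetical account/container/object names):

```
curl -s "http://127.0.0.1:8080/endpoints/v2/AUTH_acc/cont/obj"
```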
The /endpoints/ path is customizable (list_endpoints_path configuration parameter).

Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language.

This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date.

Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware).

Bases: object

List endpoints for an object, account or container.

See above for a full description.

Uses configuration parameter swift_dir (default /etc/swift).

app The next WSGI filter or app in the paste.deploy chain.

conf The configuration dict for the middleware.

Get the ring object to use to handle a request based on its policy.

policy index as defined in swift.conf

appropriate ring object

Bases: object

Caching middleware that manages caching in swift.

Created on February 27, 2012

A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length.

Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH).

The filter returns HTTPBadRequest if path is invalid.

@author: eamonn-otoole

Object versioning in Swift has 3 different modes. There are two legacy modes that have similar API with a slight difference in behavior and this middleware introduces a new mode with a completely redesigned API and implementation.

In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object.

This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response.

After the introduction of this feature containers in a Swift cluster will be in one of three possible states: 1. Object versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled.

Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container.

To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged.

To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted.

When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response.

Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in a object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id.

A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response.

A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes.

On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous versions' content will still be recoverable.

Clients can now operate on previous versions of an object using this new versioning API.

First, to list previous versions, issue a GET request to the versioned container with query parameter:

```
?versions
```

To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id.

All other pagination parameters can be used in conjunction with the versions parameter.

During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest.

To operate on previous versions, clients can use the query parameter:

```
?version-id=<id>
```

where the <id> is the value from the X-Object-Version-Id header.

Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response.

A HEAD/GET request to a delete-marker will result in a 404 Not Found response.

When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted, with no changes made to the symlink pointing to the current version.
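Putting the API together, a typical session might look like this (a sketch with hypothetical names; $ST, $TOKEN and $VID are placeholders for the storage URL, auth token and a version id taken from an earlier response):

```
# Enable versioning on a container
curl -X POST "$ST/v1/AUTH_acc/cont" -H "X-Auth-Token: $TOKEN" -H "X-Versions-Enabled: true"

# Upload twice; each PUT response carries X-Object-Version-Id
curl -X PUT "$ST/v1/AUTH_acc/cont/obj" -H "X-Auth-Token: $TOKEN" -d 'v1 data'
curl -X PUT "$ST/v1/AUTH_acc/cont/obj" -H "X-Auth-Token: $TOKEN" -d 'v2 data'

# List all versions, including any delete markers
curl "$ST/v1/AUTH_acc/cont?versions&format=json" -H "X-Auth-Token: $TOKEN"

# Read a specific prior version
curl "$ST/v1/AUTH_acc/cont/obj?version-id=$VID" -H "X-Auth-Token: $TOKEN"
```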
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline; you must also set the option allow_object_versioning to True.

Bases: ObjectVersioningContext

Bases: object

Counts bytes read from file_like so we know how big the object is that the client just PUT.

This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available.

Bases: ObjectVersioningContext

Handle request to delete a user's container.

As part of deleting a container, this middleware will also delete the hidden container holding object versions.

Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted.

Handle request for container resource.

On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info.

Bases: ObjectVersioningContext

Handle DELETE requests.

Copy current version of object to versions_container and write a delete marker before proceeding with original request.

req original request.

versions_cont container where previous versions of the object are stored.

api_version api version.

account_name account name.

object_name name of object of original request

Handle a POST request to an object in a versioned container.

If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object

req original request.

versions_cont container where previous versions of the object are stored.

account account name.

Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled.

Also, put the new data from the client into the versions container and add a static symlink in the versioned container.

req original request.

versions_cont container where previous versions of the object are stored.

api_version api version.

account_name account name.

object_name name of object of original request

Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id.

Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended.

PUT and POST requests are not allowed as that would overwrite the contents of the versioned object.

req The original request

versions_cont container holding versions of the requested obj

api_version should be v1 unless swift bumps api version

account account name string

container container name string

object object name string

is_enabled is versioning currently enabled

version version of the object to act on

Bases: WSGIContext

Logging middleware for the Swift proxy.

This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:

```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```

These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split().

remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable.

status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request.

source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.)

log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other "behind the scenes" activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information.

Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-).

Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html

The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server.

For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware.

Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made.

All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source.

Bases: object

Middleware that logs Swift proxy requests in the swift log format.

Log a request.
req swob.Request object for the request

status_int integer code for the response status

bytes_received bytes successfully read from the request body

bytes_sent bytes yielded to the WSGI server

start_time timestamp request started

end_time timestamp request completed

resp_headers dict of the response headers

ttfb time to first byte

wire_status_int the on the wire status int

Bases: Exception

Bases: object

Rate limiting middleware.

Rate limits requests on both an Account and Container level. Limits are configurable.

Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order.

req swob request

account_name account name from path

container_name container name from path

obj_name object name from path

global_ratelimit this account has an account wide ratelimit on all writes combined

Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None.

account_name account name from path

container_name container name from path

obj_name object name from path

paste.deploy app factory for creating WSGI proxy apps.

Returns number of requests allowed per second for given size.

Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info

conf conf dict to parse

name_prefix prefix of config parms to look for

info set to return extra stuff for /info registration

Bases: object

Middleware that makes an entire cluster or individual accounts read only.

Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only.

paste.deploy app factory for creating WSGI proxy apps.

Bases: object

Recon middleware used for monitoring.

/recon/load|mem|async will return various system metrics.

Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file:

```
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
```

get # of async pendings

get auditor info

get devices

get disk utilization statistics

get # of drive audit errors

get expirer info

get info from /proc/loadavg

get info from /proc/meminfo

get ALL mounted fs from /proc/mounts

get obj/container/account quarantine counts

get reconstruction info

get relinker info, if any

get replication info

get all ring md5sums

get sharding info

get info from /proc/net/sockstat and sockstat6

Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size.

get md5 of swift.conf

get current time

list unmounted (failed?) devices

get updater info

get swift version

Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation.

The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy.

All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys.
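For instance, a metadata header supplied with the copy overrides any same-named key carried over from the source (a sketch using the same placeholder names as the examples that follow):

```
curl -i -X COPY http://<storage_url>/container1/source_obj \
 -H 'X-Auth-Token: <token>' \
 -H 'Destination: /container2/destination_obj' \
 -H 'X-Object-Meta-Color: blue' \
 -H 'Content-Length: 0'
```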
Server side copy can also be used to change content-type of an existing object.

The destination container must exist before requesting copy of the object.

When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request.

The request to copy an object should have no body (i.e. content-length of the request must be zero).

There are two ways in which an object can be copied:

Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:

```
curl -i -X PUT http://<storage_url>/container1/destination_obj
 -H 'X-Auth-Token: <token>'
 -H 'X-Copy-From: /container2/source_obj'
 -H 'Content-Length: 0'
```

Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:

```
curl -i -X COPY http://<storage_url>/container2/source_obj
 -H 'X-Auth-Token: <token>'
 -H 'Destination: /container1/destination_obj'
 -H 'Content-Length: 0'
```

Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object).

Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account).

Similar to examples mentioned above, there are two ways to copy objects across accounts:

Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj
 -H 'X-Auth-Token: <token>'
 -H 'X-Copy-From: /container/source_obj'
 -H 'X-Copy-From-Account: AUTH_test2'
 -H 'Content-Length: 0'
```

Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example:

```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj
 -H 'X-Auth-Token: <token>'
 -H 'Destination: /container/destination_obj'
 -H 'Destination-Account: AUTH_test1'
 -H 'Content-Length: 0'
```

The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:

```
?multipart-manifest=get
```

If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB.

Bases: WSGIContext

Please see the SLO docs for Static Large Objects further details.

This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request.

The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:

```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```

Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:

```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```

If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/.

If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found.

For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings.

If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled.

Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix.

Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL.

By default, the listings will be rendered with a label of "Listing of /v1/account/container/path". This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to "example.com", a label of "Listing of example.com/path" will be used instead.

The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure.

Example usage of this middleware via swift:

Make the container publicly readable:

```
swift post -r '.r:*' container
```

You should be able to get objects directly, but no index.html resolution or listings.

Set an index file directive:

```
swift post -m 'web-index:index.html' container
```

You should be able to hit paths that have an index.html without needing to type the index.html part.

Turn on listings:

```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```

Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:

```
swift post -m 'web-listings-css:listings.css' container
```

Set an error file:

```
swift post -m 'web-error:error.html' container
```

Now 401's should load 401error.html, 404's should load 404error.html, etc.

Set Content-Type of directory marker object:

```
swift post -m 'web-directory-type:text/directory' container
```

Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects.

Bases: object

The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview.

The proxy logs created for any subrequests made will have swift.source set to SW.

app The next WSGI application/filter in the paste.deploy pipeline.

conf The filter configuration dict.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict. Only used in tests.

Returns a Static Web WSGI filter for use with paste.deploy.

Symlink Middleware

Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called "target object"). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process.

Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time.

Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error.

A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself.

A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used.

If a static symlink (i.e.
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlinks Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlinks Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlinks Content-Type using the ?symlink=get interface. You can change a symlinks Content-Type using a POST request. The symlinks Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink" }, { "data": "The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlinks target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function. 
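As a concrete illustration of the listing behaviour just described, here is a minimal sketch using only the standard library (the endpoint and token are placeholders):

```
# Hedged sketch: read a JSON container listing and pick out symlinks by
# the presence of the symlink_path key described above.
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/AUTH_test/container?format=json",  # placeholder
    headers={"X-Auth-Token": "<token>"})  # placeholder token
with urllib.request.urlopen(req) as resp:
    for entry in json.loads(resp.read().decode("utf-8")):
        if "symlink_path" in entry:
            print(entry["name"], "->", entry["symlink_path"])
```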
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If an X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes an X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like normal objects. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, an HTTPTemporaryRedirect error message is returned to the client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to the sysmeta namespace. req HTTP PUT object request Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this (a short sketch of computing these values follows at the end of this section):

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```

There are three special groups: .reseller_admin can do anything to any account for this auth. .reseller_reader can GET/HEAD anything in any account for this auth. .admin can do anything within the account. If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix>_<account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
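Returning to the user64 syntax above, here is a hedged sketch of producing the padding-stripped base64 values (the names match the sample line in the filter section):

```
# Build a user64_... tempauth line for names containing underscores; per
# the docs above, any '=' padding must be removed from the base64 values.
import base64

def b64_no_pad(name):
    return base64.b64encode(name.encode("utf-8")).decode("ascii").rstrip("=")

print("user64_%s_%s = testing4" % (b64_no_pad("under_score"), b64_no_pad("a_b")))
# -> user64_dW5kZXJfc2NvcmU_YV9i = testing4
```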
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { "admin": ["alice"], "read-write": ["bob", "carol"] }
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```

Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in a swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all their friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
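Putting the pieces together ahead of the detailed walkthrough below, here is a hedged end-to-end sketch that yields a complete temporary GET URL (the helper name make_temp_url and the host are illustrative; key must match the X-Account-Meta-Temp-URL-Key described next):

```
# Preview of the signing scheme detailed below: HMAC-SHA256 over
# "<method>\n<expires>\n<path>", exposed via temp_url_sig/temp_url_expires.
import hmac
from hashlib import sha256
from time import time

def make_temp_url(path, key, method="GET", ttl=60):  # illustrative helper
    expires = int(time() + ttl)
    hmac_body = "%s\n%s\n%s" % (method, expires, path)
    sig = hmac.new(key, hmac_body.encode("ascii"), sha256).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%s" % (path, sig, expires)

# Assumed host, matching the examples below:
print("https://swift-cluster.example.com"
      + make_temp_url("/v1/AUTH_account/container/object", b"mykey"))
```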
Beyond that, the middleware provides the ability to create URLs, which contain signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the ip to the range 1.2.3.X (that is, from 1.2.3.0 to 1.2.3.255):

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563&filename=My+Test+File.pdf ``` If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563&inline ``` In some cases, the client might not able to present the content of the object, but you still want the content able to save to local with the specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563&inline&filename=My+Test+File.pdf ``` This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incomingallowheaders is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incomingremoveheaders. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoingallowheaders is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoingremoveheaders. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULTINCOMINGREMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULTINCOMINGALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULTOUTGOINGREMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULTOUTGOINGALLOW_HEADERS is a list of exceptions to these" }, { "data": "Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. 
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups. A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
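As an aside before the walkthrough: the ?prefix= queries in the examples below are derived from the archive naming scheme described earlier (<length><object_name>/, where <length> is the 3-character zero-padded hex length of the object name). A hedged sketch of computing that prefix:

```
# Derive the archive-container listing prefix for a given object name.
def archive_prefix(object_name):
    return "%03x%s/" % (len(object_name), object_name)

print(archive_prefix("myobject"))  # -> 008myobject/
```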
In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-Versions-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from versions container and back in container container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location" }, { "data": "In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-History-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> -H \"Destination: container/myobject\" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allowversionedwrites to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H \"X-Auth-Token: <token>\" -H \"X-Remove-Versions-Location: x\" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name. 
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates cpu timing statistics information for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed via the URLs below. Index page for browsing profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json
```

NFL is defined by the concatenation of file name, function name and the first line number, e.g. account.py:50(GETorHEAD) or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD). A list of URL examples:

```
http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (the default), json, csv and ods spreadsheet, the last of which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability which is enabled by using the matplotlib toolkit. It is also required to be installed if this visualization feature is to be used.
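To pull profile data programmatically from the mini web UI above, here is a hedged sketch (it assumes a proxy server with this middleware enabled on localhost:8080, as in the URL examples):

```
# Fetch the current profile in json format from the mini web UI.
import json
import urllib.request

url = "http://localhost:8080/__profile__/current?format=json"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode("utf-8"))
print(type(data))
```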
{ "category": "Runtime", "file_name": "index.html#api-guides.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential:

```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
```

An example client using keystone auth with the python boto library would be:

```
import boto
from boto.s3.connection import S3Connection
connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
```

Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is:

```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
```

When using keystone, the config will be:

```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
```

Finally, add the s3api middleware section:

```
[filter:s3api]
use = egg:swift#s3api
```

Note keystonemiddleware.authtoken can be located before/after s3api but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will ask keystone to validate the token (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 may remain. Please review the descriptions in the example proxy-server.conf and what each option does before enabling it. Compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like

```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
```

you should now use

```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
```

Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swifts HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why we need this?: Original lxml.etree.Element supports only unicode for the text. It declines maintainability because we have to call a lot of encode/decode methods to apply account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such a redundant codes from swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a" }, { "data": "method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time really like as time.mktime the difference from time.mktime is allowing to 2 formats string for the argument for the S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. 
The current s3api (real S3) ACL model is as follows:

```
AccessControlPolicy:
    Owner:
    AccessControlList:
        Grant[n]:
            (Grantee, Permission)
```

Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain several Grants. By default, AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant is a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. For more information about the S3 ACL model, see the official documentation: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. (See http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACL. Bases: object Grant Class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantee. Methods: __init__: create a Grantee instance elem: create an ElementTree from itself Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance. from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups.
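To make the model above concrete, here is a hedged sketch that renders a minimal default ACL (one FULL_CONTROL grant for the owner) with the standard library. The IDs are placeholders, and a real S3 body also carries an xsi:type attribute on Grantee that is omitted here for brevity:

```
# Minimal AccessControlPolicy: Owner plus one (Grantee, Permission) grant.
import xml.etree.ElementTree as ET

policy = ET.Element("AccessControlPolicy")
owner = ET.SubElement(policy, "Owner")
ET.SubElement(owner, "ID").text = "canonical-user-id"  # placeholder
acl = ET.SubElement(policy, "AccessControlList")
grant = ET.SubElement(acl, "Grant")
grantee = ET.SubElement(grant, "Grantee")
ET.SubElement(grantee, "ID").text = "canonical-user-id"  # placeholder
ET.SubElement(grant, "Permission").text = "FULL_CONTROL"
print(ET.tostring(policy, encoding="unicode"))
```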
Make a handler with the name of the controller (e.g. BucketAclHandler is for BucketController). It consists of method(s) for the actual S3 methods on controllers, as follows. Example: ``` class BucketAclHandler(BaseAclHandler): def PUT(self, app): # put acl handling algorithms here for PUT bucket ... ``` Note If the method does not need to call _get_response again outside of the ACL check, it has to return the response it needs at the end of the method. Bases: object BaseAclHandler: handles ACLs for basic requests mapped in ACL_MAP. Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler Multi-upload operations require ACL checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the ACL only when the verb is defined. We should define the verb as the first step of the request to backend Swift for an incoming request. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is OK, but we should check it as soon as possible. | Controller | Verb | CheckResource | Permission | |:-|:-|:-|:-| | Part | PUT | Container | WRITE | | Uploads | GET | Container | READ | | Uploads | POST | Container | WRITE | | Upload | GET | Container | READ | | Upload | DELETE | Container | WRITE | | Upload | POST | Container | WRITE | Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal ACLs (not implemented) on s3acl. TODO: add translation of Swift ACLs such as x-container-read to s3acl Takes an S3-style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware. Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container's existence. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account-level requests. Handle GET Service request Bases: Controller Handles bucket
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
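As a concrete illustration of the request and response formats described above, the following sketch issues a bulk delete and parses the JSON result; the storage URL and token are placeholders, and the third-party requests library is assumed to be available:

```
import json
import urllib.parse

import requests  # assumed available

# Placeholder endpoint and token -- substitute real values.
storage_url = 'https://swift.example.com/v1/AUTH_test'
token = '<auth-token>'

paths = ['/container1/obj with spaces', '/container1/obj2', '/empty-container']
body = '\n'.join(urllib.parse.quote(p) for p in paths)

resp = requests.post(
    storage_url + '?bulk-delete',
    headers={'X-Auth-Token': token,
             'Accept': 'application/json',
             'Content-Type': 'text/plain'},
    data=body)

# The HTTP status is always 200; the real outcome is in the body, which may
# be preceded by whitespace sent while the proxy completes the request.
result = json.loads(resp.text.lstrip())
print(result['Response Status'], result['Number Deleted'], result['Errors'])
```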
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequests proxy log will have a swift.source set to BD the logs content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is" }, { "data": "inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domains CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environments Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and its unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | Metadata Use X-Container-Meta-Quota-Bytes Maximum size of the container, in bytes. X-Container-Meta-Quota-Count Maximum object count of the container. The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. 
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
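The encrypter and decrypter are enabled together via a single encryption filter, paired with a keymaster that supplies the root secret. A minimal sketch of the relevant proxy-server.conf sections might look like the following (the secret value is a placeholder; see the keymaster documentation below for its requirements):

```
[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
encryption_root_secret = <base64-encoded secret of at least 32 bytes>

[filter:encryption]
use = egg:swift#encryption
disable_encryption = False
```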
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body cryptometa cryptometa for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body cryptometa cryptometa for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache: ``` [pipeline:main] pipeline = ... cache etag-quoter ... [filter:etag-quoter] use = egg:swift#etag_quoter ``` Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is: ``` <form action="<swift-url>" method="POST" enctype="multipart/form-data"> <input type="hidden" name="redirect" value="<redirect-url>" /> <input type="hidden" name="max_file_size" value="<bytes>" /> <input type="hidden" name="max_file_count" value="<count>" /> <input type="hidden" name="expires" value="<unix-timestamp>" /> <input type="hidden" name="signature" value="<hmac>" /> <input type="file" name="file1" /><br /> <input type="submit" /> </form> ``` Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input: ``` <input type="hidden" name="x_delete_at" value="<unix-timestamp>" /> <input type="hidden" name="x_delete_after" value="<seconds>" /> ``` If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input: ``` <input type="hidden" name="content-type" value="text/html" /> <input type="hidden" name="content-encoding" value="gzip" /> ``` The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like: ``` https://swift-cluster.example.com/v1/AUTH_account/container/ ``` Optionally, you can include an object prefix to better separate different users' uploads, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature: ``` import hmac from hashlib import sha512 from time import time path = '/v1/account/container/object_prefix' redirect = 'https://srv.com/some-page' # set to '' if redirect not in form max_file_size = 104857600 max_file_count = 10 expires = int(time() + 600) key = 'mykey' hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size, max_file_count, expires) signature = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'), sha512).hexdigest() ``` The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy. The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetchcryptokeys hook in the request environment. Subclasses should define logroute, keymasteropts, and keymasterconfsection attributes, and implement the getroot_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being" }, { "data": "secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function: ``` <pathkey> = HMACSHA256(<root_secret>, <path>) ``` Setup container and object keys based on the request path. Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and allids. The allids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys. Bases: object Swift middleware to Keystone authorization system. In Swifts proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone. 
The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to set account auto-creation to true: ``` [app:proxy-server] account_autocreate = true ``` And add a swift authorization filter section, such as: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting, which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this: ``` reseller_prefix = NEWAUTH ``` Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example: ``` reseller_prefix = AUTH, SERVICE ``` This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix: ``` operator_roles service_roles ``` For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes: ``` reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, someotherrole ``` X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration: ``` reseller_prefix = AUTH, SERVICE AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_service_roles = service ``` The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the defaultdomainid option in the keystoneauth configuration: ``` defaultdomainid = default ``` The backwards compatible behavior can be disabled by setting the config option allownamesin_acls to false: ``` allownamesin_acls = false ``` To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reselleradmin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know youre not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false: ``` allow_overrides = false ``` app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI" }, { "data": "Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form: ``` /endpoints/{account}/{container}/{object} /endpoints/{account}/{container} /endpoints/{account} /endpoints/v1/{account}/{container}/{object} /endpoints/v1/{account}/{container} /endpoints/v1/{account} ``` with a JSON-encoded list of endpoints of the form: ``` http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj} http://{server}:{port}/{dev}/{part}/{acc}/{cont} http://{server}:{port}/{dev}/{part}/{acc} ``` correspondingly, e.g.: ``` http://10.1.1.1:6200/sda1/2/a/c2/o1 http://10.1.1.1:6200/sda1/2/a/c2 http://10.1.1.1:6200/sda1/2/a ``` Using the v2 API, answers requests of the form: ``` /endpoints/v2/{account}/{container}/{object} /endpoints/v2/{account}/{container} /endpoints/v2/{account} ``` with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.: ``` { \"endpoints\": {\"http://10.1.1.1:6210/sda1/2/a/c3/o1\", \"http://10.1.1.1:6230/sda3/2/a/c3/o1\", \"http://10.1.1.1:6240/sda4/2/a/c3/o1\"}, \"headers\": {\"X-Backend-Storage-Policy-Index\": \"1\"}} ``` In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the objects container is using storage policy index 1. 
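For example, a client inside the cluster might query the v2 API and use the returned headers when contacting the storage nodes directly; this sketch assumes a proxy reachable at the given placeholder address:

```
import json
import urllib.request

# Placeholder address; no auth token is needed because the call is
# unauthenticated (see the note below about trusted environments).
url = 'http://proxy.example.com:8080/endpoints/v2/AUTH_test/c3/o1'

with urllib.request.urlopen(url) as resp:
    info = json.load(resp)

for endpoint in info['endpoints']:
    print(endpoint)  # e.g. http://10.1.1.1:6210/sda1/2/AUTH_test/c3/o1

# Headers to send along with requests made to those endpoints,
# e.g. X-Backend-Storage-Policy-Index.
backend_headers = info['headers']
```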
The /endpoints/ path is customizable (listendpointspath configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why its provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-serer app itself, e.g.: ``` [pipeline:main] pipeline = catcherrors healthcheck proxy-logging namecheck cache ratelimit tempauth sos proxy-logging proxy-server [filter:name_check] use = egg:swift#name_check forbidden_chars = '\"`<> maximum_length = 255 ``` There are default settings for forbiddenchars (FORBIDDENCHARS) and maximumlength (MAXLENGTH) The filter returns HTTPBadRequest if path is invalid. @author: eamonn-otoole Object versioning in Swift has 3 different modes. There are two legacy modes that have similar API with a slight difference in behavior and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy" }, { "data": "This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature containers in a Swift cluster will be in one of either 3 possible states: 1. Object versioning never enabled, Object Versioning Enabled or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored. 
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter: ``` ?versions ``` To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter: ``` ?version-id=<id> ``` where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted and no changes made to the symlink pointing to the current version.
The logging format implemented below is as follows: ``` clientip remoteaddr end_time.datetime method path protocol statusint referer useragent authtoken bytesrecvd bytes_sent clientetag transactionid headers requesttime source loginfo starttime endtime policy_index ``` These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remoteaddr is the contents of the REMOTEADDR environment variable, while client_ip is swifts best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middlewares start_response function, unless the WSGI environment has an item with key swift.proxyloggingstatus, in which case the value of that item is used. Other middlewares may set swift.proxyloggingstatus to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) loginfo (swift.loginfo in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.loginfo', []).append(yourinfo) so as to not disturb others log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the logmsgtemplate option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy servers pipeline when there is middleware installed that can return custom responses that dont follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the clients original request using the 2nd requests body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxyaccesslog_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swifts proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request. 
req" }, { "data": "object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wirestatusint the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that make an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the systems page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys. 
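For instance, using the X-Copy-From style of copy described below, additional metadata can be supplied on the copy request itself; the container names and the X-Object-Meta-Color key in this sketch are placeholders: ``` curl -i -X PUT http://<storageurl>/container1/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'X-Object-Meta-Color: blue' -H 'Content-Length: 0' ``` If the source object already carried X-Object-Meta-Color, the value given here replaces it on the destination.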
Server side copy can also be used to change content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be" }, { "data": "There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example: ``` curl -i -X PUT http://<storageurl>/container1/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0' ``` Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example: ``` curl -i -X COPY http://<storageurl>/container2/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0' ``` Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example: ``` curl -i -X PUT http://<host>:<port>/v1/AUTHtest1/container/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0' ``` Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example: ``` curl -i -X COPY http://<host>:<port>/v1/AUTHtest2/container/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0' ``` The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request: ``` ?multipart-manifest=get ``` If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs for Static Large Objects further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delayauthdecision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added. 
For example: ``` [DEFAULT] ... [pipeline:main] pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server ... [filter:staticweb] use = egg:swift#staticweb ``` Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values: ``` X-Container-Meta-Web-Index <index.name> X-Container-Meta-Web-Error <error.name.suffix> ``` If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the" }, { "data": "Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note tempurlprefix values should typically end with a slash (/) when used with StaticWeb. StaticWebs redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable: ``` swift post -r '.r:*' container ``` You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive: ``` swift post -m 'web-index:index.html' container ``` You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings: ``` swift post -r '.r:*,.rlistings' container swift post -m 'web-listings: true' container ``` Now you should see object listings for paths and pseudo paths that have no index.html. 
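You can also override the default listings label described above; the value here is only an example: ``` swift post -m 'web-listings-label:example.com' container ```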
Enable a custom listings style sheet: ``` swift post -m 'web-listings-css:listings.css' container ``` Set an error file: ``` swift post -m 'web-error:error.html' container ``` Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object: ``` swift post -m 'web-directory-type:text/directory' container ``` Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series" }, { "data": "Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object theyre pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlinks X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e. 
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlinks Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlinks Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlinks Content-Type using the ?symlink=get interface. You can change a symlinks Content-Type using a POST request. The symlinks Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink" }, { "data": "The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlinks target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function. 
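As a rough illustration, a symlink's entry in a ?format=json container listing might look like the following (all names and values are made up): ``` {\"name\": \"my-symlink\", \"bytes\": 0, \"content_type\": \"application/symlink\", \"symlink_path\": \"/v1/AUTH_test/target-container/target-obj\"} ```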
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If an X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes an X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than the one that introduced symlink support. Doing so may result in unexpected container listing results, in addition to symlink objects behaving like normal objects. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example: ``` internal_client_conf_path = /etc/swift/container-sync-client.conf ``` Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response has been called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request startresponse startresponse function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what is a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink- headers to client-facing X-Symlink- headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as: ``` [pipeline:main] pipeline = catch_errors cache tempauth proxy-server ``` Set account auto creation to true in proxy-server.conf: ``` [app:proxy-server] account_autocreate = true ``` And add a tempauth filter section, such as: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing .admin usertest2tester2 = testing2 .admin usertesttester3 = testing3 user64dW5kZXJfc2NvcmUYV9i = testing4 ``` See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is: ``` user<account><user> = <key> [group] [group] [...] [storage_url] ``` If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this: ``` user64<accountb64><userb64> = <key> [group] [...] [storage_url] ``` There are three special groups: .reseller_admin can do anything to any account for this auth .reseller_reader can GET/HEAD anything in any account for this auth" }, { "data": "can do anything within the account If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to: ``` $HOST/v1/<resellerprefix><account> ``` Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user<account><user> name. 
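For instance, a hypothetical entry that hands back an explicit storage URL might look like this, where the host, key and names are placeholders: ``` usertesttester = testing .admin http://saio:8080/v1/AUTH_test ```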
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, youll have to specify the storageurlscheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH. When a requests token and/or path start with AUTH, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user: ``` [filter:tempauth] use = egg:swift#tempauth reseller_prefix = AUTH, SERVICE SERVICErequiregroup = .service useradminadmin = admin .admin .reseller_admin userjoeacctjoe = joepw .admin usermaryacctmary = marypw .admin userglanceglance = glancepw .service ``` The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service accounts container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL: ``` from swift.common.middleware.acl import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } headervalue = formatacl(version=2, acldict=acldata) ``` To generate a curl command line from the above: ``` token=... storage_url=... 
python -c ' from" }, { "data": "import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } headers = {'X-Account-Access-Control': formatacl(version=2, acldict=acl_data)} header_str = ' '.join([\"-H '%s: %s'\" % (k, v) for k, v in headers.items()]) print('curl -D- -X POST -H \"x-auth-token: $token\" %s ' '$storageurl' % headerstr) ' ``` Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via getaccountinfo. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various request for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples: ``` GET <auth-prefix>/v1/<act>/auth X-Auth-User: <act>:<usr> or X-Storage-User: <usr> X-Auth-Key: <key> or X-Storage-Pass: <key> GET <auth-prefix>/auth X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> X-Auth-Key: <key> or X-Storage-Pass: <key> GET <auth-prefix>/v1.0 X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> X-Auth-Key: <key> or X-Storage-Pass: <key> ``` On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link. 
Beyond that, the middleware provides the ability to create URLs, which contain signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the URLs can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' key = b'mykey' hmac_body = '%s\\n%s\\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563 ``` For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with \"<digest name>:\". This is required for HMAC-SHA512 signatures. For example, comparable code for generating an HMAC-SHA512 signature would be: ``` import base64 import hmac from hashlib import sha512 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' key = b'mykey' hmac_body = '%s\\n%s\\n%s' % (method, expires, path) sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new( key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii') ``` Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm 5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==& tempurlexpires=1516741234 ``` You may also use ISO 8601 UTC timestamps with the format \"%Y-%m-%dT%H:%M:%SZ\" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object?
tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=2017-12-05T21:16:03Z ``` If a prefix-based signature with the prefix pre is desired, set path to: ``` path = 'prefix:/v1/AUTH_account/container/pre' ``` The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called tempurlprefix. So, if sig and expires would end up like above, following URL would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/pre/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563& tempurlprefix=pre ``` Another valid URL: ``` https://swift-cluster.example.com/v1/AUTH_account/container/pre/ subfolder/another_object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563& tempurlprefix=pre ``` If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' ip_range = '1.2.3.4' key = b'mykey' hmacbody = 'ip=%s\\n%s\\n%s\\n%s' % (iprange, method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called tempurlip_range. So, if sig and expires would end up like above, following URL would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c& tempurlexpires=1648082711& tempurlip_range=1.2.3.4 ``` Similarly to lock down the ip to a range of 1.2.3.X so starting from the ip 1.2.3.0 to 1.2.3.255: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' ip_range =" }, { "data": "key = b'mykey' hmacbody = 'ip=%s\\n%s\\n%s\\n%s' % (iprange, method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` Then the following url would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83& tempurlexpires=1648082711& tempurlip_range=1.2.3.0/24 ``` Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved. 
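As an illustration (the exact header formatting may vary between releases), a GET through a valid temp URL for an object named object would typically include a response header along these lines: ``` Content-Disposition: attachment; filename=\"object\" ```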
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563&filename=My+Test+File.pdf ``` If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563&inline ``` In some cases, the client might not be able to present the content of the object, but you may still want the content to be saved locally with a specific filename. You can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameters to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? tempurlsig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& tempurlexpires=1512508563&inline&filename=My+Test+File.pdf ``` This middleware understands the following configuration settings: incoming_remove_headers A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired incoming_allow_headers A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None outgoing_remove_headers A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* outgoing_allow_headers A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* methods A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE allowed_digests A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 DEFAULT_INCOMING_ALLOW_HEADERS Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_REMOVE_HEADERS Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. DEFAULT_OUTGOING_ALLOW_HEADERS Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_REMOVE_HEADERS Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals. Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTPXPRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTPXSENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that is now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simulataneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archivecontainer>/<length><objectname>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata" }, { "data": "A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object. 
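For example, a metadata-only update like the following (the metadata key is illustrative) leaves the archive container untouched: ``` curl -i -XPOST -H \"X-Auth-Token: <token>\" -H \"X-Object-Meta-Color: blue\" http://<storage_url>/container/myobject ```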
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files are still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allowversionedwrites to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allowversionedwrites = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allowversionedwrites is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists. 
In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-Versions-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from versions container and back in container container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location" }, { "data": "In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-History-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> -H \"Destination: container/myobject\" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allowversionedwrites to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H \"X-Auth-Token: <token>\" -H \"X-Remove-Versions-Location: x\" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name. 
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on eventlet aware profiler.(For the future, more profilers could be added in to collect more data for analysis.) Profiling all incoming requests and accumulating cpu timing statistics information for performance tuning and optimization. An mini web UI is also provided for profiling data analysis. It can be accessed from the URL as below. Index page for browse profile data: ``` http://SERVERIP:PORT/profile_ ``` List all profiles to return profile ids in json format: ``` http://SERVERIP:PORT/profile_/ http://SERVERIP:PORT/profile_/all ``` Retrieve specific profile data in different formats: ``` http://SERVERIP:PORT/profile/PROFILEID?format=[default|json|csv|ods] http://SERVERIP:PORT/profile_/current?format=[default|json|csv|ods] http://SERVERIP:PORT/profile_/all?format=[default|json|csv|ods] ``` Retrieve metrics from specific function in json format: ``` http://SERVERIP:PORT/profile/PROFILEID/NFL?format=json http://SERVERIP:PORT/profile_/current/NFL?format=json http://SERVERIP:PORT/profile_/all/NFL?format=json NFL is defined by concatenation of file name, function name and the first line number. e.g.:: account.py:50(GETorHEAD) or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD) A list of URL examples: http://localhost:8080/profile (proxy server) http://localhost:6200/profile/all (object server) http://localhost:6201/profile/current (container server) http://localhost:6202/profile/12345?format=json (account server) ``` The profiling middleware can be configured in paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in etc directory. The profiling data is provided with four formats such as binary(by default), json, csv and odf spreadsheet which requires installing odfpy library: ``` sudo pip install odfpy ``` Theres also a simple visualization capability which is enabled by using matplotlib toolkit. it is also required to be installed if" } ]
{ "category": "Runtime", "file_name": "index.html#install-guides.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "What is OpenStack? OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. The OpenStack project is provided under the Apache 2.0 license." } ]
{ "category": "Runtime", "file_name": "index.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift can be configured to work both using an integral web front-end and using a full-fledged Web Server such as the Apache2 (HTTPD) web server. The integral web front-end is a wsgi mini Web Server which opens up its own socket and serves http requests directly. The incoming requests accepted by the integral web front-end are then forwarded to a wsgi application (the core swift) for further handling, possibly via wsgi middleware sub-components. client<->integral web front-end<->middleware<->core swift To gain full advantage of Apache2, Swift can alternatively be configured to work as a request processor of the Apache2 server. This alternative deployment scenario uses mod_wsgi of Apache2 to forward requests to the swift wsgi application and middleware. client<->Apache2 with mod_wsgi<>middleware<->core swift The integral web front-end offers simplicity and requires minimal configuration. It is also the web front-end most commonly used with Swift. Additionally, the integral web front-end includes support for receiving chunked transfer encoding from a client, presently not supported by Apache2 in the operation mode described here. The use of Apache2 offers new ways to extend Swift and integrate it with existing authentication, administration and control systems. A single Apache2 server can serve as the web front end of any number of swift servers residing on a swift node. For example when a storage node offers account, container and object services, a single Apache2 server can serve as the web front end of all three services. The apache variant described here was tested as part of an IBM research work. It was found that following tuning, the Apache2 offer generally equivalent performance to that offered by the integral web front-end. Alternative to Apache2, other web servers may be used, but were never tested. Both Apache2 and mod-wsgi needs to be installed on the system. Ubuntu comes with Apache2 installed. Install mod-wsgi using: ``` sudo apt-get install libapache2-mod-wsgi ``` Create a directory for the Apache2 wsgi files: ``` sudo mkdir /srv/www/swift ``` Create a working directory for the wsgi processes: ``` sudo mkdir -m 2770 /var/lib/swift sudo chown swift:swift /var/lib/swift ``` Create a file for each service under /srv/www/swift. For a proxy service create /srv/www/swift/proxy-server.wsgi: ``` from swift.common.wsgi import initrequestprocessor application, conf, logger, log_name = \\ initrequestprocessor('/etc/swift/proxy-server.conf','proxy-server') ``` For an account service create /srv/www/swift/account-server.wsgi: ``` from" }, { "data": "import initrequestprocessor application, conf, logger, log_name = \\ initrequestprocessor('/etc/swift/account-server.conf', 'account-server') ``` For an container service create /srv/www/swift/container-server.wsgi: ``` from swift.common.wsgi import initrequestprocessor application, conf, logger, log_name = \\ initrequestprocessor('/etc/swift/container-server.conf', 'container-server') ``` For an object service create /srv/www/swift/object-server.wsgi: ``` from swift.common.wsgi import initrequestprocessor application, conf, logger, log_name = \\ initrequestprocessor('/etc/swift/object-server.conf', 'object-server') ``` Create a /etc/apache2/conf.d/swift_wsgi.conf configuration file that will define a port and Virtual Host per each local service. 
For example, an Apache2 serving as a web front end of a proxy service: ``` Listen 8080 <VirtualHost *:8080> ServerName proxy-server LimitRequestBody 5368709122 LimitRequestFields 200 WSGIDaemonProcess proxy-server processes=5 threads=1 user=swift group=swift display-name=%{GROUP} WSGIProcessGroup proxy-server WSGIScriptAlias / /srv/www/swift/proxy-server.wsgi LogLevel debug CustomLog /var/log/apache2/proxy.log combined ErrorLog /var/log/apache2/proxy-server </VirtualHost> ``` Notice that when using Apache, the limit on the maximal object size should be imposed by Apache using LimitRequestBody rather than by the swift proxy. Note also that LimitRequestBody should indicate the same value as indicated by max_file_size located in both /etc/swift/swift.conf and in /etc/swift/test.conf. The Swift default value for max_file_size (when not present) is 5368709122. For example, an Apache2 serving as a web front end of a storage node: ``` Listen 6200 <VirtualHost *:6200> ServerName object-server LimitRequestFields 200 WSGIDaemonProcess object-server processes=5 threads=1 user=swift group=swift display-name=%{GROUP} WSGIProcessGroup object-server WSGIScriptAlias / /srv/www/swift/object-server.wsgi LogLevel debug CustomLog /var/log/apache2/access.log combined ErrorLog /var/log/apache2/object-server </VirtualHost> Listen 6201 <VirtualHost *:6201> ServerName container-server LimitRequestFields 200 WSGIDaemonProcess container-server processes=5 threads=1 user=swift group=swift display-name=%{GROUP} WSGIProcessGroup container-server WSGIScriptAlias / /srv/www/swift/container-server.wsgi LogLevel debug CustomLog /var/log/apache2/access.log combined ErrorLog /var/log/apache2/container-server </VirtualHost> Listen 6202 <VirtualHost *:6202> ServerName account-server LimitRequestFields 200 WSGIDaemonProcess account-server processes=5 threads=1 user=swift group=swift display-name=%{GROUP} WSGIProcessGroup account-server WSGIScriptAlias / /srv/www/swift/account-server.wsgi LogLevel debug CustomLog /var/log/apache2/access.log combined ErrorLog /var/log/apache2/account-server </VirtualHost> ``` Enable the newly configured Virtual Hosts: ``` a2ensite swift_wsgi.conf ``` Next, stop, test and start Apache2 again: ``` systemctl stop apache2.service apache2ctl -t systemctl start apache2.service ``` Edit the tests config file and add: ``` web_front_end = apache2 normalized_urls = True ``` Also check that the file includes max_file_size with the same value as used for LimitRequestBody in the Apache config file above. You may now run the functional tests to verify the setup, e.g.: ``` cd ~swift/swift ./.functests ```" } ]
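To confirm that mod_wsgi is actually fronting the proxy, a quick smoke test can help. This is a sketch that assumes the example ports above and that the proxy pipeline exposes the /info endpoint (available by default unless cluster info has been disabled):

```
apache2ctl -S                          # list the VirtualHosts Apache2 loaded
curl -i http://127.0.0.1:8080/info     # should return cluster capability JSON
```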
{ "category": "Runtime", "file_name": "logs.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swifts source code is hosted on github and managed with git. The current trunk can be checked out like this: ``` git clone https://github.com/openstack/swift.git ``` This will clone the Swift repository under your account. A source tarball for the latest release of Swift is available on the launchpad project page. Prebuilt packages for Ubuntu and RHEL variants are available. Swift Ubuntu Packages Swift RDO Packages Swift uses git for source control. The OpenStack Developers Guide describes the steps for setting up Git and all the necessary accounts for contributing code to Swift. Once you have the source code and source control set up, you can make your changes to Swift. The Development Guidelines describe the testing requirements before submitting Swift code. In summary, you can execute tox from the swift home directory (where you checked out the source code): ``` tox ``` Tox will present tests results. Notice that in the beginning, it is very common to break many coding style guidelines. The OpenStack Developers Guide describes the most common git commands that you will need. Following is a list of the commands that you need to know for your first contribution to Swift: To clone a copy of Swift: ``` git clone https://github.com/openstack/swift.git ``` Under the swift directory, set up the Gerrit repository. The following command configures the repository to know about Gerrit and installs the Change-Id commit hook. You only need to do this once: ``` git review -s ``` To create your development branch (substitute branch_name for a name of your choice: ``` git checkout -b <branch_name> ``` To check the files that have been updated in your branch: ``` git status ``` To check the differences between your branch and the repository: ``` git diff ``` Assuming you have not added new files, you commit all your changes using: ``` git commit -a ``` Read the Summary of Git commit message structure for best practices on writing the commit message. When you are ready to send your changes for review use: ``` git review ``` If successful, Git response message will contain a URL you can use to track your changes. If you need to make further changes to the same review, you can commit them using: ``` git commit -a --amend ``` This will commit the changes under the same set of changes you issued earlier. Notice that in order to send your latest version for review, you will still need to call: ``` git review ``` After proposing changes to Swift, you can track them at" }, { "data": "After logging in, you will see a dashboard of Outgoing reviews for changes you have proposed, Incoming reviews for changes you are reviewing, and Recently closed changes for which you were either a reviewer or owner. After rebasing, the following steps should be performed to rebuild the swift installation. Note that these commands should be performed from the root of the swift repo directory (e.g. 
$HOME/swift/): ``` sudo python setup.py develop sudo pip install -r test-requirements.txt ``` If using TOX, depending on the changes made during the rebase, you may need to rebuild the TOX environment (generally this will be the case if test-requirements.txt was updated such that a new version of a package is required), this can be accomplished using the -r argument to the TOX cli: ``` tox -r ``` You can include any of the other TOX arguments as well, for example, to run the pep8 suite and rebuild the TOX environment the following can be used: ``` tox -r -e pep8 ``` The rebuild option only needs to be specified once for a particular build (e.g. pep8), that is further invocations of the same build will not require this until the next rebase. You may run into the following errors when starting Swift if you rebase your commit using: ``` git rebase ``` ``` Traceback (most recent call last): File \"/usr/local/bin/swift-init\", line 5, in <module> from pkg_resources import require File \"/usr/lib/python2.7/dist-packages/pkg_resources.py\", line 2749, in <module> workingset = WorkingSet.build_master() File \"/usr/lib/python2.7/dist-packages/pkgresources.py\", line 446, in build_master return cls.buildfromrequirements(requires_) File \"/usr/lib/python2.7/dist-packages/pkgresources.py\", line 459, in buildfromrequirements dists = ws.resolve(reqs, Environment()) File \"/usr/lib/python2.7/dist-packages/pkg_resources.py\", line 628, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: swift==2.3.1.devXXX ``` (where XXX represents a dev version of Swift). ``` Traceback (most recent call last): File \"/usr/local/bin/swift-proxy-server\", line 10, in <module> execfile(file) File \"/home/swift/swift/bin/swift-proxy-server\", line 23, in <module> sys.exit(runwsgi(conffile, 'proxy-server', options)) File \"/home/swift/swift/swift/common/wsgi.py\", line 888, in run_wsgi loadapp(confpath, globalconf=global_conf) File \"/home/swift/swift/swift/common/wsgi.py\", line 390, in loadapp func(PipelineWrapper(ctx)) File \"/home/swift/swift/swift/proxy/server.py\", line 602, in modifywsgipipeline ctx = pipe.createfilter(filtername) File \"/home/swift/swift/swift/common/wsgi.py\", line 329, in create_filter globalconf=self.context.globalconf) File \"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py\", line 296, in loadcontext globalconf=globalconf) File \"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py\", line 328, in _loadegg return loader.getcontext(objecttype, name, global_conf) File \"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py\", line 620, in get_context object_type, name=name) File \"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py\", line 659, in findeggentry_point for prot in protocol_options] or '(no entry points)')))) LookupError: Entry point 'versionedwrites' not found in egg 'swift' (dir: /home/swift/swift; protocols: paste.filterfactory, paste.filterappfactory; entry_points: ) ``` This happens because git rebase will retrieve code for a different version of Swift in the development stream, but the start scripts under /usr/local/bin have not been updated. The solution is to follow the steps described in the Post rebase instructions section. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
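Putting the workflow described in this entry together, one pass through the contribution loop looks roughly like the following sketch; the branch name is only an example, and the exact tox environments depend on the repo's tox.ini:

```
git checkout -b fix-some-bug    # example branch name
# ... edit code ...
tox -e pep8                     # style checks; plain `tox` runs the full default set
git commit -a                   # use --amend when revising an earlier patchset
git review                      # push the change to Gerrit
```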
{ "category": "Runtime", "file_name": "managing-openstack-object-storage-with-swift-cli.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply. This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Swift and other components of OpenStack can be found on the OpenStack wiki and at http://docs.openstack.org. Note If youre looking for associated projects that enhance or use Swift, please see the Associated Projects page. See Complete Reference for the Object Storage REST API The following provides supporting information for the REST API: The OpenStack End User Guide has additional information on using Swift. See the Manage objects and containers section. Index Module Index Search Page Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "large_objects.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Defining your Storage Policies is very easy to do with Swift. It is important that the administrator understand the concepts behind Storage Policies before actually creating and using them in order to get the most benefit out of the feature and, more importantly, to avoid having to make unnecessary changes once a set of policies have been deployed to a cluster. It is highly recommended that the reader fully read and comprehend Storage Policies before proceeding with administration of policies. Plan carefully and it is suggested that experimentation be done first on a non-production cluster to be certain that the desired configuration meets the needs of the users. See Upgrading and Confirming Functionality before planning the upgrade of your existing deployment. Following is a high level view of the very few steps it takes to configure policies once you have decided what you want to do: Define your policies in /etc/swift/swift.conf Create the corresponding object rings Communicate the names of the Storage Policies to cluster users For a specific example that takes you through these steps, please see Adding Storage Policies to an Existing SAIO You may build the storage rings on any server with the appropriate version of Swift installed. Once built or changed (rebalanced), you must distribute the rings to all the servers in the cluster. Storage rings contain information about all the Swift storage partitions and how they are distributed between the different nodes and disks. Swift 1.6.0 is the last version to use a Python pickle format. Subsequent versions use a different serialization format. Rings generated by Swift versions 1.6.0 and earlier may be read by any version, but rings generated after 1.6.0 may only be read by Swift versions greater than 1.6.0. So when upgrading from version 1.6.0 or earlier to a version greater than 1.6.0, either upgrade Swift on your ring building server last after all Swift nodes have been successfully upgraded, or refrain from generating rings until all Swift nodes have been successfully upgraded. If you need to downgrade from a version of Swift greater than 1.6.0 to a version less than or equal to 1.6.0, first downgrade your ring-building server, generate new rings, push them out, then continue with the rest of the downgrade. For more information see The Rings. Removing a device from the ring: ``` swift-ring-builder <builder-file> remove <ipaddress>/<devicename> ``` Removing a server from the ring: ``` swift-ring-builder <builder-file> remove <ip_address> ``` Adding devices to the ring: See Preparing the Ring See what devices for a server are in the ring: ``` swift-ring-builder <builder-file> search <ip_address> ``` Once you are done with all changes to the ring, the changes need to be committed: ``` swift-ring-builder <builder-file> rebalance ``` Once the new rings are built, they should be pushed out to all the servers in the cluster. Optionally, if invoked as swift-ring-builder-safe the directory containing the specified builder file will be locked (via a .lock file in the parent directory). This provides a basic safe guard against multiple instances of the swift-ring-builder (or other utilities that observe this lock) from attempting to write to or read the builder/ring files while operations are in progress. This can be useful in environments where ring management has been automated but the operator still needs to interact with the rings manually. 
If the ring builder is not producing the balances that you are expecting, you can gain visibility into what its doing with the --debug" }, { "data": "``` swift-ring-builder <builder-file> rebalance --debug ``` This produces a great deal of output that is mostly useful if you are either (a) attempting to fix the ring builder, or (b) filing a bug against the ring builder. You may notice in the rebalance output a dispersion number. What this number means is explained in Dispersion but in essence is the percentage of partitions in the ring that have too many replicas within a particular failure domain. You can ask swift-ring-builder what the dispersion is with: ``` swift-ring-builder <builder-file> dispersion ``` This will give you the percentage again, if you want a detailed view of the dispersion simply add a --verbose: ``` swift-ring-builder <builder-file> dispersion --verbose ``` This will not only display the percentage but will also display a dispersion table that lists partition dispersion by tier. You can use this table to figure out were you need to add capacity or to help tune an Overload value. Now lets take an example with 1 region, 3 zones and 4 devices. Each device has the same weight, and the dispersion --verbose might show the following: ``` Dispersion is 16.666667, Balance is 0.000000, Overload is 0.00% Required overload is 33.333333% Worst tier is 33.333333 (r1z3) -- Tier Parts % Max 0 1 2 3 -- r1 768 0.00 3 0 0 0 256 r1z1 192 0.00 1 64 192 0 0 r1z1-127.0.0.1 192 0.00 1 64 192 0 0 r1z1-127.0.0.1/sda 192 0.00 1 64 192 0 0 r1z2 192 0.00 1 64 192 0 0 r1z2-127.0.0.2 192 0.00 1 64 192 0 0 r1z2-127.0.0.2/sda 192 0.00 1 64 192 0 0 r1z3 384 33.33 1 0 128 128 0 r1z3-127.0.0.3 384 33.33 1 0 128 128 0 r1z3-127.0.0.3/sda 192 0.00 1 64 192 0 0 r1z3-127.0.0.3/sdb 192 0.00 1 64 192 0 0 ``` The first line reports that there are 256 partitions with 3 copies in region 1; and this is an expected output in this case (single region with 3 replicas) as reported by the Max value. However, there is some imbalance in the cluster, more precisely in zone 3. The Max reports a maximum of 1 copy in this zone; however 50.00% of the partitions are storing 2 replicas in this zone (which is somewhat expected, because there are more disks in this zone). You can now either add more capacity to the other zones, decrease the total weight in zone 3 or set the overload to a value greater than 33.333333% - only as much overload as needed will be used. You can create scripts to create the account and container rings and rebalance. Heres an example script for the Account ring. Use similar commands to create a make-container-ring.sh script on the proxy server node. Create a script file called make-account-ring.sh on the proxy server node with the following content: ``` cd /etc/swift rm -f account.builder account.ring.gz backups/account.builder backups/account.ring.gz swift-ring-builder account.builder create 18 3 1 swift-ring-builder account.builder add r1z1-<account-server-1>:6202/sdb1 1 swift-ring-builder account.builder add r1z2-<account-server-2>:6202/sdb1 1 swift-ring-builder account.builder rebalance ``` You need to replace the values of <account-server-1>, <account-server-2>, etc. with the IP addresses of the account servers used in your setup. You can have as many account servers as you need. All account servers are assumed to be listening on port 6202, and have a storage device called sdb1 (this is a directory name created under /drives when we setup the account server). The z1, z2, etc. 
designate zones, and you can choose whether you put devices in the same or different zones. The r1 designates the region, with different regions specified as r1, r2," }, { "data": "Make the script file executable and run it to create the account ring file: ``` chmod +x make-account-ring.sh sudo ./make-account-ring.sh ``` Copy the resulting ring file /etc/swift/account.ring.gz to all the account server nodes in your Swift environment, and put them in the /etc/swift directory on these nodes. Make sure that every time you change the account ring configuration, you copy the resulting ring file to all the account nodes. It is recommended that system updates and reboots are done a zone at a time. This allows the update to happen, and for the Swift cluster to stay available and responsive to requests. It is also advisable when updating a zone, let it run for a while before updating the other zones to make sure the update doesnt have any adverse effects. In the event that a drive has failed, the first step is to make sure the drive is unmounted. This will make it easier for Swift to work around the failure until it has been resolved. If the drive is going to be replaced immediately, then it is just best to replace the drive, format it, remount it, and let replication fill it up. After the drive is unmounted, make sure the mount point is owned by root (root:root 755). This ensures that rsync will not try to replicate into the root drive once the failed drive is unmounted. If the drive cant be replaced immediately, then it is best to leave it unmounted, and set the device weight to 0. This will allow all the replicas that were on that drive to be replicated elsewhere until the drive is replaced. Once the drive is replaced, the device weight can be increased again. Setting the device weight to 0 instead of removing the drive from the ring gives Swift the chance to replicate data from the failing disk too (in case it is still possible to read some of the data). Setting the device weight to 0 (or removing a failed drive from the ring) has another benefit: all partitions that were stored on the failed drive are distributed over the remaining disks in the cluster, and each disk only needs to store a few new partitions. This is much faster compared to replicating all partitions to a single, new disk. It decreases the time to recover from a degraded number of replicas significantly, and becomes more and more important with bigger disks. If a server is having hardware issues, it is a good idea to make sure the Swift services are not running. This will allow Swift to work around the failure while you troubleshoot. If the server just needs a reboot, or a small amount of work that should only last a couple of hours, then it is probably best to let Swift work around the failure and get the machine fixed and back online. When the machine comes back online, replication will make sure that anything that is missing during the downtime will get updated. If the server has more serious issues, then it is probably best to remove all of the servers devices from the ring. Once the server has been repaired and is back online, the servers devices can be added back into the ring. It is important that the devices are reformatted before putting them back into the ring as it is likely to be responsible for a different set of partitions than before. 
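As a concrete sketch of the failed-drive procedure above — the mount point follows the /srv/node layout used elsewhere in this guide, and the ring search value is a placeholder:

```
umount /srv/node/sdb1                     # take the failing disk out of service
chown root:root /srv/node/sdb1
chmod 755 /srv/node/sdb1                  # keep rsync off the root drive
# if the replacement will take a while, drain the device rather than removing it:
swift-ring-builder object.builder set_weight <search-value> 0
swift-ring-builder object.builder rebalance
```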
It has been our experience that when a drive is about to fail, error messages will spew into" }, { "data": "There is a script called swift-drive-audit that can be run via cron to watch for bad drives. If errors are detected, it will unmount the bad drive, so that Swift can work around it. The script takes a configuration file with the following settings: [drive-audit] | 0 | 1 | 2 | |:--|:|:-| | Option | Default | Description | | user | swift | Drop privileges to this user for non-root tasks | | logfacility | LOGLOCAL0 | Syslog log facility | | log_level | INFO | Log level | | device_dir | /srv/node | Directory devices are mounted under | | minutes | 60 | Number of minutes to look back in /var/log/kern.log | | error_limit | 1 | Number of errors to find before a device is unmounted | | logfilepattern | /var/log/kern* | Location of the log file with globbing pattern to check against device errors | | regexpatternX | (see below) | Regular expression patterns to be used to locate device blocks with errors in the log file | Option Default Description user swift Drop privileges to this user for non-root tasks log_facility LOG_LOCAL0 Syslog log facility log_level INFO Log level device_dir /srv/node Directory devices are mounted under minutes 60 Number of minutes to look back in /var/log/kern.log error_limit 1 Number of errors to find before a device is unmounted logfilepattern /var/log/kern* Location of the log file with globbing pattern to check against device errors regexpatternX (see below) Regular expression patterns to be used to locate device blocks with errors in the log file The default regex pattern used to locate device blocks with errors are berrorb.b(sd[a-z]{1,2}d?)b and b(sd[a-z]{1,2}d?)b.berrorb. One is able to overwrite the default above by providing new expressions using the format regexpatternX = regex_expression, where X is a number. This script has been tested on Ubuntu 10.04 and Ubuntu 12.04, so if you are using a different distro or OS, some care should be taken before using in production. Prevent disk full scenarios by ensuring that the proxy-server blocks PUT requests and rsync prevents replication to the specific drives. You can prevent proxy-server PUT requests to low space disks by ensuring fallocate_reserve is set in account-server.conf, container-server.conf, and object-server.conf. By default, fallocate_reserve is set to 1%. In the object server, this blocks PUT requests that would leave the free disk space below 1% of the disk. In the account and container servers, this blocks operations that will increase account or container database size once the free disk space falls below 1%. Setting fallocate_reserve is highly recommended to avoid filling disks to 100%. When Swifts disks are completely full, all requests involving those disks will fail, including DELETE requests that would otherwise free up space. This is because object deletion includes the creation of a zero-byte tombstone (.ts) to record the time of the deletion for replication purposes; this happens prior to deletion of the objects data. On a completely-full filesystem, that zero-byte .ts file cannot be created, so the DELETE request will fail and the disk will remain completely full. If fallocate_reserve is set, then the filesystem will have enough space to create the zero-byte .ts file, and thus the deletion of the object will succeed and free up some space. In order to prevent rsync replication to specific drives, firstly setup rsync_module per disk in your object-replicator. 
Set this in object-server.conf: ``` [object-replicator] rsyncmodule = {replicationip}::object_{device} ``` Set the individual drives in rsync.conf. For example: ``` [object_sda] max connections = 4 lock file =" }, { "data": "[object_sdb] max connections = 4 lock file = /var/lock/object_sdb.lock ``` Finally, monitor the disk space of each disk and adjust the rsync max connections per drive to -1. We recommend utilising your existing monitoring solution to achieve this. The following is an example script: ``` import os import errno RESERVE = 500 2 * 20 # 500 MiB DEVICES = '/srv/node1' pathtemplate = '/etc/rsync.d/disable%s.conf' config_template = ''' [object_%s] max connections = -1 ''' def disable_rsync(device): with open(path_template % device, 'w') as f: f.write(config_template.lstrip() % device) def enable_rsync(device): try: os.unlink(path_template % device) except OSError as e: if e.errno != errno.ENOENT: raise for device in os.listdir(DEVICES): path = os.path.join(DEVICES, device) st = os.statvfs(path) free = st.fbavail * st.ffrsize if free < RESERVE: disable_rsync(device) else: enable_rsync(device) ``` For the above script to work, ensure /etc/rsync.d/ conf files are included, by specifying &include in your rsync.conf file: ``` &include /etc/rsync.d ``` Use this in conjunction with a cron job to periodically run the script, for example: ``` root /some/path/to/disable_rsync.py ``` There is a swift-dispersion-report tool for measuring overall cluster health. This is accomplished by checking if a set of deliberately distributed containers and objects are currently in their proper places within the cluster. For instance, a common deployment has three replicas of each object. The health of that object can be measured by checking if each replica is in its proper place. If only 2 of the 3 is in place the objects heath can be said to be at 66.66%, where 100% would be perfect. A single objects health, especially an older object, usually reflects the health of that entire partition the object is in. If we make enough objects on a distinct percentage of the partitions in the cluster, we can get a pretty valid estimate of the overall cluster health. In practice, about 1% partition coverage seems to balance well between accuracy and the amount of time it takes to gather results. The first thing that needs to be done to provide this health value is create a new account solely for this usage. Next, we need to place the containers and objects throughout the system so that they are on distinct partitions. The swift-dispersion-populate tool does this by making up random container and object names until they fall on distinct partitions. Last, and repeatedly for the life of the cluster, we need to run the swift-dispersion-report tool to check the health of each of these containers and objects. These tools need direct access to the entire cluster and to the ring files (installing them on a proxy server will probably do). Both swift-dispersion-populate and swift-dispersion-report use the same configuration file, /etc/swift/dispersion.conf. Example conf file: ``` [dispersion] auth_url = http://localhost:8080/auth/v1.0 auth_user = test:tester auth_key = testing endpoint_type = internalURL ``` There are also options for the conf file for specifying the dispersion coverage (defaults to 1%), retries, concurrency, etc. though usually the defaults are fine. If you want to use keystone v3 for authentication there are options like authversion, userdomainname, projectdomainname and projectname. 
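For example, a keystone v3 variant of the file might look like the following sketch; the endpoint and credentials are placeholders, and the option names are the underscored forms of those listed above (auth_version, user_domain_name, project_domain_name, project_name):

```
[dispersion]
auth_url = http://keystonehost:5000/v3/
auth_version = 3
auth_user = dispersion_user
auth_key = dispersion_password
project_name = service
user_domain_name = Default
project_domain_name = Default
endpoint_type = internalURL
```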
Once the configuration is in place, run swift-dispersion-populate to populate the containers and objects throughout the cluster. Now that those containers and objects are in place, you can run swift-dispersion-report to get a dispersion report, or the overall health of the cluster. Here is an example of a cluster in perfect health: ``` $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 19s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents" }, { "data": "of the container partition space Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space ``` Now Ill deliberately double the weight of a device in the object ring (with replication turned off) and rerun the dispersion report to show what impact that has: ``` $ swift-ring-builder object.builder set_weight d0 200 $ swift-ring-builder object.builder rebalance ... $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 8s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space Queried 2619 objects for dispersion reporting, 7s, 0 retries There were 1763 partitions missing one copy. 77.56% of object copies found (6094 of 7857) Sample represents 1.00% of the object partition space ``` You can see the health of the objects in the cluster has gone down significantly. Of course, I only have four devices in this test environment, in a production environment with many many devices the impact of one device change is much less. Next, Ill run the replicators to get everything put back into place and then rerun the dispersion report: ``` ... start object replicators and monitor logs until they're caught up ... $ swift-dispersion-report Queried 2621 containers for dispersion reporting, 17s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space ``` You can also run the report for only containers or objects: ``` $ swift-dispersion-report --container-only Queried 2621 containers for dispersion reporting, 17s, 0 retries 100.00% of container copies found (7863 of 7863) Sample represents 1.00% of the container partition space $ swift-dispersion-report --object-only Queried 2619 objects for dispersion reporting, 7s, 0 retries 100.00% of object copies found (7857 of 7857) Sample represents 1.00% of the object partition space ``` Alternatively, the dispersion report can also be output in JSON format. This allows it to be more easily consumed by third party utilities: ``` $ swift-dispersion-report -j {\"object\": {\"retries:\": 0, \"missingtwo\": 0, \"copiesfound\": 7863, \"missingone\": 0, \"copiesexpected\": 7863, \"pctfound\": 100.0, \"overlapping\": 0, \"missingall\": 0}, \"container\": {\"retries:\": 0, \"missingtwo\": 0, \"copiesfound\": 12534, \"missingone\": 0, \"copiesexpected\": 12534, \"pctfound\": 100.0, \"overlapping\": 15, \"missingall\": 0}} ``` Note that you may select which storage policy to use by setting the option policy-name silver or -P silver (silver is the example policy name here). If no policy is specified, the default will be used per the swift.conf file. 
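For example, to populate and report against a non-default policy — a sketch assuming a policy named silver exists in swift.conf:

```
swift-dispersion-populate --policy-name silver
swift-dispersion-report --policy-name silver
swift-dispersion-report -P silver --container-only
```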
When you specify a policy the containers created also include the policy index, thus even when running a container_only report, you will need to specify the policy not using the default. Swift provides two features that may be used to distribute replicas of objects across multiple geographically distributed data-centers: with Global Clusters object replicas may be dispersed across devices from different data-centers by using regions in ring device descriptors; with Container to Container Synchronization objects may be copied between independent Swift clusters in each data-center. The operation and configuration of each are described in their respective documentation. The following points should be considered when selecting the feature that is most appropriate for a particular use case: Global Clusters allows the distribution of object replicas across data-centers to be controlled by the cluster operator on per-policy basis, since the distribution is determined by the assignment of devices from each data-center in each policys ring" }, { "data": "With Container Sync the end user controls the distribution of objects across clusters on a per-container basis. Global Clusters requires an operator to coordinate ring deployments across multiple data-centers. Container Sync allows for independent management of separate Swift clusters in each data-center, and for existing Swift clusters to be used as peers in Container Sync relationships without deploying new policies/rings. Global Clusters seamlessly supports features that may rely on cross-container operations such as large objects and versioned writes. Container Sync requires the end user to ensure that all required containers are syncd for these features to work in all data-centers. Global Clusters makes objects available for GET or HEAD requests in both data-centers even if a replica of the object has not yet been asynchronously migrated between data-centers, by forwarding requests between data-centers. Container Sync is unable to serve requests for an object in a particular data-center until the asynchronous sync process has copied the object to that data-center. Global Clusters may require less storage capacity than Container Sync to achieve equivalent durability of objects in each data-center. Global Clusters can restore replicas that are lost or corrupted in one data-center using replicas from other data-centers. Container Sync requires each data-center to independently manage the durability of objects, which may result in each data-center storing more replicas than with Global Clusters. Global Clusters execute all account/container metadata updates synchronously to account/container replicas in all data-centers, which may incur delays when making updates across WANs. Container Sync only copies objects between data-centers and all Swift internal traffic is confined to each data-center. Global Clusters does not yet guarantee the availability of objects stored in Erasure Coded policies when one data-center is offline. With Container Sync the availability of objects in each data-center is independent of the state of other data-centers once objects have been synced. Container Sync also allows objects to be stored using different policy types in different data-centers. You can check if handoff partitions are piling up on a server by comparing the expected number of partitions with the actual number on your disks. 
First get the number of partitions that are currently assigned to a server using the dispersion command from swift-ring-builder: ``` swift-ring-builder sample.builder dispersion --verbose Dispersion is 0.000000, Balance is 0.000000, Overload is 0.00% Required overload is 0.000000% -- Tier Parts % Max 0 1 2 3 -- r1 8192 0.00 2 0 0 8192 0 r1z1 4096 0.00 1 4096 4096 0 0 r1z1-172.16.10.1 4096 0.00 1 4096 4096 0 0 r1z1-172.16.10.1/sda1 4096 0.00 1 4096 4096 0 0 r1z2 4096 0.00 1 4096 4096 0 0 r1z2-172.16.10.2 4096 0.00 1 4096 4096 0 0 r1z2-172.16.10.2/sda1 4096 0.00 1 4096 4096 0 0 r1z3 4096 0.00 1 4096 4096 0 0 r1z3-172.16.10.3 4096 0.00 1 4096 4096 0 0 r1z3-172.16.10.3/sda1 4096 0.00 1 4096 4096 0 0 r1z4 4096 0.00 1 4096 4096 0 0 r1z4-172.16.20.4 4096 0.00 1 4096 4096 0 0 r1z4-172.16.20.4/sda1 4096 0.00 1 4096 4096 0 0 r2 8192 0.00 2 0 8192 0 0 r2z1 4096 0.00 1 4096 4096 0 0 r2z1-172.16.20.1 4096 0.00 1 4096 4096 0 0 r2z1-172.16.20.1/sda1 4096 0.00 1 4096 4096 0 0 r2z2 4096 0.00 1 4096 4096 0 0 r2z2-172.16.20.2 4096 0.00 1 4096 4096 0 0 r2z2-172.16.20.2/sda1 4096 0.00 1 4096 4096 0 0 ``` As you can see from the output, each server should store 4096 partitions, and each region should store 8192 partitions. This example used a partition power of 13 and 3" }, { "data": "With write_affinity enabled it is expected to have a higher number of partitions on disk compared to the value reported by the swift-ring-builder dispersion command. The number of additional (handoff) partitions in region r1 depends on your cluster size, the amount of incoming data as well as the replication speed. Lets use the example from above with 6 nodes in 2 regions, and write_affinity configured to write to region r1 first. swift-ring-builder reported that each node should store 4096 partitions: ``` Expected partitions for region r2: 8192 Handoffs stored across 4 nodes in region r1: 8192 / 4 =2048 Maximum number of partitions on each server in region r1: 2048 + 4096 = 6144 ``` Worst case is that handoff partitions in region 1 are populated with new object replicas faster than replication is able to move them to region 2. In that case you will see ~ 6144 partitions per server in region r1. Your actual number should be lower and between 4096 and 6144 partitions (preferably on the lower side). Now count the number of object partitions on a given server in region 1, for example on 172.16.10.1. Note that the pathnames might be different; /srv/node/ is the default mount location, and objects applies only to storage policy 0 (storage policy 1 would use objects-1 and so on): ``` find -L /srv/node/ -maxdepth 3 -type d -wholename \"objects/\" | wc -l ``` If this number is always on the upper end of the expected partition number range (4096 to 6144) or increasing you should check your replication speed and maybe even disable write_affinity. Please refer to the next section how to collect metrics from Swift, and especially swift-recon -r how to check replication stats. Various metrics and telemetry can be obtained from the account, container, and object servers using the recon server middleware and the swift-recon cli. To do so update your account, container, or object servers pipelines to include recon and add the associated filter config. 
object-server.conf sample: ``` [pipeline:main] pipeline = recon object-server [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift ``` container-server.conf sample: ``` [pipeline:main] pipeline = recon container-server [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift ``` account-server.conf sample: ``` [pipeline:main] pipeline = recon account-server [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift ``` The reconcachepath simply sets the directory where stats for a few items will be stored. Depending on the method of deployment you may need to create this directory manually and ensure that Swift has read/write access. Finally, if you also wish to track asynchronous pending on your object servers you will need to setup a cronjob to run the swift-recon-cron script periodically on your object servers: ``` /5 * swift /usr/bin/swift-recon-cron /etc/swift/object-server.conf ``` Once the recon middleware is enabled, a GET request for /recon/<metric> to the backend object server will return a JSON-formatted response: ``` fhines@ubuntu:~$ curl -i http://localhost:6230/recon/async HTTP/1.1 200 OK Content-Type: application/json Content-Length: 20 Date: Tue, 18 Oct 2011 21:03:01 GMT {\"async_pending\": 0} ``` Note that the default port for the object server is 6200, except on a Swift All-In-One installation, which uses 6210, 6220, 6230, and 6240. The following metrics and telemetry are currently exposed: | 0 | 1 | |:--|:-| | Request URI | Description | | /recon/load | returns 1,5, and 15 minute load average | | /recon/mem | returns /proc/meminfo | | /recon/mounted | returns ALL currently mounted filesystems | | /recon/unmounted | returns all unmounted drives if mount_check = True | | /recon/diskusage | returns disk utilization for storage devices | | /recon/driveaudit |" }, { "data": "# of drive audit errors | | /recon/ringmd5 | returns object/container/account ring md5sums | | /recon/swiftconfmd5 | returns swift.conf md5sum | | /recon/quarantined | returns # of quarantined objects/accounts/containers | | /recon/sockstat | returns consumable info from /proc/net/sockstat|6 | | /recon/devices | returns list of devices and devices dir i.e. /srv/node | | /recon/async | returns count of async pending | | /recon/replication | returns object replication info (for backward compatibility) | | /recon/replication/<type> | returns replication info for given type (account, container, object) | | /recon/auditor/<type> | returns auditor stats on last reported scan for given type (account, container, object) | | /recon/updater/<type> | returns last updater sweep times for given type (container, object) | | /recon/expirer/object | returns time elapsed and number of objects deleted during last object expirer sweep | | /recon/version | returns Swift version | | /recon/time | returns node time | Request URI Description /recon/load returns 1,5, and 15 minute load average /recon/mem returns /proc/meminfo /recon/mounted returns ALL currently mounted filesystems /recon/unmounted returns all unmounted drives if mount_check = True /recon/diskusage returns disk utilization for storage devices /recon/driveaudit returns # of drive audit errors /recon/ringmd5 returns object/container/account ring md5sums /recon/swiftconfmd5 returns swift.conf md5sum /recon/quarantined returns # of quarantined objects/accounts/containers /recon/sockstat returns consumable info from /proc/net/sockstat|6 /recon/devices returns list of devices and devices dir i.e. 
/srv/node /recon/async returns count of async pending /recon/replication returns object replication info (for backward compatibility) /recon/replication/<type> returns replication info for given type (account, container, object) /recon/auditor/<type> returns auditor stats on last reported scan for given type (account, container, object) /recon/updater/<type> returns last updater sweep times for given type (container, object) /recon/expirer/object returns time elapsed and number of objects deleted during last object expirer sweep /recon/version returns Swift version /recon/time returns node time Note that objectreplicationlast and objectreplicationtime in object replication info are considered to be transitional and will be removed in the subsequent releases. Use replicationlast and replicationtime instead. This information can also be queried via the swift-recon command line utility: ``` fhines@ubuntu:~$ swift-recon -h Usage: usage: swift-recon <server_type> [-v] [--suppress] [-a] [-r] [-u] [-d] [-R] [-l] [-T] [--md5] [--auditor] [--updater] [--expirer] [--sockstat] <server_type> account|container|object Defaults to object server. ex: swift-recon container -l --auditor Options: -h, --help show this help message and exit -v, --verbose Print verbose info --suppress Suppress most connection related errors -a, --async Get async stats -r, --replication Get replication stats -R, --reconstruction Get reconstruction stats --auditor Get auditor stats --updater Get updater stats --expirer Get expirer stats -u, --unmounted Check cluster for unmounted devices -d, --diskusage Get disk usage stats -l, --loadstats Get cluster load average stats -q, --quarantined Get cluster quarantine stats --md5 Get md5sum of servers ring and compare to local copy --sockstat Get cluster socket usage stats -T, --time Check time synchronization --all Perform all checks. Equal to -arudlqT --md5 --sockstat --auditor --updater --expirer --driveaudit --validate-servers -z ZONE, --zone=ZONE Only query servers in specified zone -t SECONDS, --timeout=SECONDS Time to wait for a response from a server --swiftdir=SWIFTDIR Default = /etc/swift ``` For example, to obtain container replication info from all hosts in zone 3: ``` fhines@ubuntu:~$ swift-recon container -r --zone 3 =============================================================================== --> Starting reconnaissance on 1 hosts =============================================================================== [2012-04-02 02:45:48] Checking on replication [failure] low: 0.000, high: 0.000, avg: 0.000, reported: 1 [success] low: 486.000, high: 486.000, avg: 486.000, reported: 1 [replication_time] low: 20.853, high: 20.853, avg: 20.853, reported: 1 [attempted] low: 243.000, high: 243.000, avg: 243.000, reported: 1 ``` If you have a StatsD server running, Swift may be configured to send it real-time operational" }, { "data": "To enable this, set the following configuration entries (see the sample configuration files): ``` logstatsdhost = localhost logstatsdport = 8125 logstatsddefaultsamplerate = 1.0 logstatsdsampleratefactor = 1.0 logstatsdmetric_prefix = [empty-string] ``` If logstatsdhost is not set, this feature is disabled. The default values for the other settings are given above. The logstatsdhost can be a hostname, an IPv4 address, or an IPv6 address (not surrounded with brackets, as this is unnecessary since the port is specified separately). 
If a hostname resolves to an IPv4 address, an IPv4 socket will be used to send StatsD UDP packets, even if the hostname would also resolve to an IPv6 address. The sample rate is a real number between 0 and 1 which defines the probability of sending a sample for any given event or timing measurement. This sample rate is sent with each sample to StatsD and used to multiply the value. For example, with a sample rate of 0.5, StatsD will multiply that counters value by 2 when flushing the metric to an upstream monitoring system (Graphite, Ganglia, etc.). Some relatively high-frequency metrics have a default sample rate less than one. If you want to override the default sample rate for all metrics whose default sample rate is not specified in the Swift source, you may set logstatsddefaultsamplerate to a value less than one. This is NOT recommended (see next paragraph). A better way to reduce StatsD load is to adjust logstatsdsampleratefactor to a value less than one. The logstatsdsampleratefactor is multiplied to any sample rate (either the global default or one specified by the actual metric logging call in the Swift source) prior to handling. In other words, this one tunable can lower the frequency of all StatsD logging by a proportional amount. To get the best data, start with the default logstatsddefaultsamplerate and logstatsdsampleratefactor values of 1 and only lower logstatsdsampleratefactor if needed. The logstatsddefaultsamplerate should not be used and remains for backward compatibility only. The metric prefix will be prepended to every metric sent to the StatsD server For example, with: ``` logstatsdmetric_prefix = proxy01 ``` the metric proxy-server.errors would be sent to StatsD as proxy01.proxy-server.errors. This is useful for differentiating different servers when sending statistics to a central StatsD server. If you run a local StatsD server per node, you could configure a per-node metrics prefix there and leave logstatsdmetric_prefix blank. Note that metrics reported to StatsD are counters or timing data (which are sent in units of milliseconds). StatsD usually expands timing data out to min, max, avg, count, and 90th percentile per timing metric, but the details of this behavior will depend on the configuration of your StatsD server. Some important gauge metrics may still need to be collected using another method. For example, the object-server.async_pendings StatsD metric counts the generation of async_pendings in real-time, but will not tell you the current number of async_pending container updates on disk at any point in time. Note also that the set of metrics collected, their names, and their semantics are not locked down and will change over time. For more details, see the service-specific tables listed below: Or, view All Statsd Metrics as one page. When a request is made to Swift, it is given a unique transaction id. This id should be in every log line that has to do with that request. This can be useful when looking at all the services that are hit by a single request. If you need to know where a specific account, container or object is in the cluster, swift-get-nodes will show the location where each replica should" }, { "data": "If you are looking at an object on the server and need more info, swift-object-info will display the account, container, replica locations and metadata of the object. 
If you are looking at a container on the server and need more info, swift-container-info will display all the information like the account, container, replica locations and metadata of the container. If you are looking at an account on the server and need more info, swift-account-info will display the account, replica locations and metadata of the account. If you want to audit the data for an account, swift-account-audit can be used to crawl the account, checking that all containers and objects can be found. Swift services are generally managed with swift-init. the general usage is swift-init <service> <command>, where service is the Swift service to manage (for example object, container, account, proxy) and command is one of: | 0 | 1 | |:-|:-| | Command | Description | | start | Start the service | | stop | Stop the service | | restart | Restart the service | | shutdown | Attempt to gracefully shutdown the service | | reload | Attempt to gracefully restart the service | | reload-seamless | Attempt to seamlessly restart the service | Command Description start Start the service stop Stop the service restart Restart the service shutdown Attempt to gracefully shutdown the service reload Attempt to gracefully restart the service reload-seamless Attempt to seamlessly restart the service A graceful shutdown or reload will allow all server workers to finish any current requests before exiting. The parent server process exits immediately. A seamless reload will make new configuration settings active, with no window where client requests fail due to there being no active listen socket. The parent server process will re-exec itself, retaining its existing PID. After the re-execed parent server process binds its listen sockets, the old listen sockets are closed and old server workers finish any current requests before exiting. There is also a special case of swift-init all <command>, which will run the command for all swift services. In cases where there are multiple configs for a service, a specific config can be managed with swift-init <service>.<config> <command>. For example, when a separate replication network is used, there might be /etc/swift/object-server/public.conf for the object server and /etc/swift/object-server/replication.conf for the replication services. In this case, the replication services could be restarted with swift-init object-server.replication restart. On system failures, the XFS file system can sometimes truncate files its trying to write and produce zero-byte files. The object-auditor will catch these problems but in the case of a system crash it would be advisable to run an extra, less rate limited sweep to check for these specific files. You can run this command as follows: ``` swift-object-auditor /path/to/object-server/config/file.conf once -z 1000 ``` -z means to only check for zero-byte files at 1000 files per second. At times it is useful to be able to run the object auditor on a specific device or set of devices. You can run the object-auditor as follows: ``` swift-object-auditor /path/to/object-server/config/file.conf once --devices=sda,sdb ``` This will run the object auditor on only the sda and sdb devices. This param accepts a comma separated list of values. At times it is useful to be able to run the object replicator on a specific device or partition. You can run the object-replicator as follows: ``` swift-object-replicator /path/to/object-server/config/file.conf once --devices=sda,sdb ``` This will run the object replicator on only the sda and sdb devices. 
You can likewise run that command with" }, { "data": "Both params accept a comma separated list of values. If both are specified they will be ANDed together. These can only be run in once mode. Swift Orphans are processes left over after a reload of a Swift server. For example, when upgrading a proxy server you would probably finish with a swift-init proxy-server reload or /etc/init.d/swift-proxy reload. This kills the parent proxy server process and leaves the child processes running to finish processing whatever requests they might be handling at the time. It then starts up a new parent proxy server process and its children to handle new incoming requests. This allows zero-downtime upgrades with no impact to existing requests. The orphaned child processes may take a while to exit, depending on the length of the requests they were handling. However, sometimes an old process can be hung up due to some bug or hardware issue. In these cases, these orphaned processes will hang around forever. swift-orphans can be used to find and kill these orphans. swift-orphans with no arguments will just list the orphans it finds that were started more than 24 hours ago. You shouldnt really check for orphans until 24 hours after you perform a reload, as some requests can take a long time to process. swift-orphans -k TERM will send the SIG_TERM signal to the orphans processes, or you can kill -TERM the pids yourself if you prefer. You can run swift-orphans --help for more options. Swift Oldies are processes that have just been around for a long time. Theres nothing necessarily wrong with this, but it might indicate a hung process if you regularly upgrade and reload/restart services. You might have so many servers that you dont notice when a reload/restart fails; swift-oldies can help with this. For example, if you upgraded and reloaded/restarted everything 2 days ago, and youve already cleaned up any orphans with swift-orphans, you can run swift-oldies -a 48 to find any Swift processes still around that were started more than 2 days ago and then investigate them accordingly. Swift supports setting up custom log handlers for services by specifying a comma-separated list of functions to invoke when logging is setup. It does so via the logcustomhandlers configuration option. Logger hooks invoked are passed the same arguments as Swifts get_logger function (as well as the getLogger and LogAdapter object): | 0 | 1 | |:|:| | Name | Description | | conf | Configuration dict to read settings from | | name | Name of the logger received | | logtoconsole | (optional) Write log messages to console on stderr | | log_route | Route for the logging received | | fmt | Override log format received | | logger | The logging.getLogger object | | adapted_logger | The LogAdapter object | Name Description conf Configuration dict to read settings from name Name of the logger received logtoconsole (optional) Write log messages to console on stderr log_route Route for the logging received fmt Override log format received logger The logging.getLogger object adapted_logger The LogAdapter object A basic example that sets up a custom logger might look like the following: ``` def mylogger(conf, name, logtoconsole, logroute, fmt, logger, adapted_logger): myconfopt = conf.get('somecustomsetting') myhandler = thirdpartylogstorehandler(myconfopt) logger.addHandler(my_handler) ``` See Custom Logger Hooks for sample use cases. 
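To activate a hook like mylogger above, point the option at its dotted path in the [DEFAULT] section of the service's config; my_package.my_module is a placeholder for wherever the function actually lives:

```
[DEFAULT]
log_custom_handlers = my_package.my_module.mylogger
```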
Please refer to the security guide at https://docs.openstack.org/security-guide and in particular the Object Storage section." } ]
{ "category": "Runtime", "file_name": "index.html#user-guides.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential: ``` +------------+------------------------------------------------+ | Field | Value | +------------+------------------------------------------------+ | access | c2e30f2cd5204b69a39b3f1130ca8f61 | | links | {u'self': u'http://controller:5000/v3/......'} | | project_id | 407731a6c2d0425c86d1e7f12a900488 | | secret | baab242d192a4cd6b68696863e07ed59 | | trust_id | None | | user_id | 00f0ee06afe74f81b410f3fe03d34fbc | +------------+------------------------------------------------+ ``` An example client using keystone auth with the python boto library will be: ``` import boto.s3.connection from boto.s3.connection import S3Connection connection = S3Connection( aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61', aws_secret_access_key='baab242d192a4cd6b68696863e07ed59', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` Set s3api before your auth middleware in the pipeline in your proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is: ``` [pipeline:main] pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server ``` When using keystone, the config will be: ``` [pipeline:main] pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server ``` Finally, add the s3api middleware section: ``` [filter:s3api] use = egg:swift#s3api ``` Note keystonemiddleware.authtoken can be located before or after s3api, but we recommend putting it before s3api because, when authtoken is after s3api, both authtoken and s3token will ask keystone to validate the token (i.e. authenticate twice). In the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any issues that existed in swift3 may remain. Please read the descriptions in the example proxy-server.conf and make sure you understand what each option does before enabling it. Compatibility will continue to be improved upstream; you can keep an eye on it via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally. Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ``` [filter:s3token] use = egg:swift3#s3token auth_uri = https://keystonehost:35357 auth_version = 3 ``` you should now use ``` [filter:s3token] use = egg:swift#s3token auth_uri = https://keystonehost:35357/v3 ``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift-specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why do we need this? The original lxml.etree.Element supports only unicode for the text. That hurts maintainability because we have to call a lot of encode/decode methods to apply an account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp This format should be like YYYYMMDDThhmmssZ. mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it accepts two string formats for its argument, for S3 testing usage. timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) or (b) %Y-%m-%dT%H:%M:%S (e.g. copy result). time_format a string of format to parse in case (b). Returns a float instance in epoch time. Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria (http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html). True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs.
The current s3api (real S3) ACL model is as follows: ``` AccessControlPolicy: Owner: AccessControlList: Grant[n]: (Grantee, Permission) ``` Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant includes a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you want more detailed information about the S3 ACL model, please see the official documentation here: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class (see http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance. Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves completely the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACLs. Bases: object Grant class which includes both Grantee and Permission. Create an etree element. Convert an ElementTree to an ACL instance. Bases: object Base class for grantees. Methods: init: create a Grantee instance. elem: create an ElementTree from itself. Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance. from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 predefined groups. Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts. Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic acl handling (i.e. it will check the acl from ACL_MAP by using HEAD). Make a handler with the name of the controller (e.g.
BucketAclHandler is for BucketController). It consists of methods for the actual S3 methods on controllers, as follows. Example: ``` class BucketAclHandler(BaseAclHandler): def PUT: << put acl handling algorithms here for PUT bucket >> ``` Note If the method DOESN'T need to call get_response again outside of the acl checking, the method has to return the response it needs at the end of the method. Bases: object BaseAclHandler: Handles ACLs for basic requests mapped in ACL_MAP. Get an ACL instance from S3 (e.g. x-amz-grant) headers or the S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler Multi-upload operations require acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step of the request to backend Swift for an incoming request. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is ok, but we should check it as soon as possible. | Controller | Verb | CheckResource | Permission | |:-|:-|:-|:-| | Part | PUT | Container | WRITE | | Uploads | GET | Container | READ | | Uploads | POST | Container | WRITE | | Upload | GET | Container | READ | | Upload | DELETE | Container | WRITE | | Upload | POST | Container | WRITE | Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acl, such as from x-container-read, to s3acl. Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware. Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container exists. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request. Bases: Controller Handles bucket requests. Handle DELETE Bucket request. Handle GET Bucket (List Objects) request. Handle HEAD Bucket (Get Metadata) request. Handle POST Bucket request. Handle PUT Bucket request. Bases: Controller Handles requests on objects. Handle DELETE Object request. Handle GET Object request. Handle HEAD Object request. Handle PUT Object and PUT Object (Copy) request. Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server, and precautions need to be taken to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this, the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the request. The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequest's proxy log will have a swift.source set to EA, and the log's content length will reflect the unzipped size of the file. If double proxy-logging is used, the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /container_name/obj_name ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive, in that every response will be a 200 OK and you must parse the response body for the actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, the number not found, and a list of those that failed. The return body will be formatted in the way specified in the request's Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml.
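As a minimal sketch of the bulk delete request format described above (the proxy address, token, account and object names are illustrative placeholders, not fixed values of the middleware's API):

```python
# A minimal bulk-delete sketch using only the standard library; the host,
# token, account and paths below are placeholders.
import urllib.request

token = 'AUTH_tk_example'  # a valid auth token for the account
# Newline-separated, URL-encoded paths; an (empty) container may be listed too.
body = b'/container/object1\n/container/object2\n/empty-container'

req = urllib.request.Request(
    'http://127.0.0.1:8080/v1/AUTH_test?bulk-delete',
    data=body, method='POST',
    headers={'X-Auth-Token': token,
             'Content-Type': 'text/plain',
             'Accept': 'application/json'})
with urllib.request.urlopen(req) as resp:
    # The status is always 200 OK; the JSON body carries the per-item results.
    print(resp.read().decode('utf-8'))
```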
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD and a content length of 0. If double proxy-logging is used, the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: the xml response does not include an xml declaration. Arguments: the resulting format; generated data about the results; a list of quoted filenames that failed; and the tag name to use for root elements when returning XML, e.g. extract or delete. Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised. inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and an inability to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware.
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache: ``` [pipeline:main] pipeline = ... cache etag-quoter ... [filter:etag-quoter] use = egg:swift#etag_quoter ``` Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in \"d41d8cd98f00b204e9800998ecf8427e\". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is: ``` <form action=\"<swift-url>\" method=\"POST\" enctype=\"multipart/form-data\"> <input type=\"hidden\" name=\"redirect\" value=\"<redirect-url>\" /> <input type=\"hidden\" name=\"max_file_size\" value=\"<bytes>\" /> <input type=\"hidden\" name=\"max_file_count\" value=\"<count>\" /> <input type=\"hidden\" name=\"expires\" value=\"<unix-timestamp>\" /> <input type=\"hidden\" name=\"signature\" value=\"<hmac>\" /> <input type=\"file\" name=\"file1\" /><br /> <input type=\"submit\" /> </form> ``` Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input: ``` <input type=\"hidden\" name=\"x_delete_at\" value=\"<unix-timestamp>\" /> <input type=\"hidden\" name=\"x_delete_after\" value=\"<seconds>\" /> ``` If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input: ``` <input type=\"hidden\" name=\"content-type\" value=\"text/html\" /> <input type=\"hidden\" name=\"content-encoding\" value=\"gzip\" /> ``` The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like: ``` https://swift-cluster.example.com/v1/AUTH_account/container/ ``` Optionally, you can include an object prefix to better separate different users' uploads, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type=\"file\" name=\"filexx\" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted; after that time, the form is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature: ``` import hmac from hashlib import sha512 from time import time path = '/v1/account/container/object_prefix' redirect = 'https://srv.com/some-page' # set to '' if redirect not in form max_file_size = 104857600 max_file_count = 10 expires = int(time() + 600) key = 'mykey' hmac_body = '%s\\n%s\\n%s\\n%s\\n%s' % (path, redirect, max_file_size, max_file_count, expires) # hmac.new() requires bytes on Python 3 signature = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'), sha512).hexdigest() ``` The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation, as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy. The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
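For illustration, a pipeline honouring this placement might begin as in the following sketch (the middlewares after gatekeeper simply stand in for whatever else a deployment uses):

```
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache ... proxy-server
```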
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being encrypted. secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function: ``` <path_key> = HMAC_SHA256(<root_secret>, <path>) ``` Set up container and object keys based on the request path. Keys are derived from the request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids, including the one used to generate the returned keys. Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone.
proxy-server.conf-sample The authtoken middleware is shipped with keystonemiddleware - it has no dependencies other than itself, so you can either install it by copying the file directly into your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to set account auto creation to true: ``` [app:proxy-server] account_autocreate = true ``` And add a swift authorization filter section, such as: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting, which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this: ``` reseller_prefix = NEWAUTH ``` Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example: ``` reseller_prefix = AUTH, SERVICE ``` This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix: ``` operator_roles service_roles ``` For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes: ``` reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, some_other_role ``` X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration: ``` reseller_prefix = AUTH, SERVICE AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_service_roles = service ``` The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because, with the introduction of domains in keystone, names are no longer globally unique.
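For example, a cross-tenant read ACL using this UUID-based syntax might be set with the swift client as in the following sketch (the UUIDs and container name are placeholders):

```
swift post -r '6b8b5ce1f9b04f1e8d4a33a2a1dbf3e6:4d3c9f0aa5f94e5c9f6a2b8e7d1c2f3a' shared-container
```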
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration: ``` default_domain_id = default ``` The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false: ``` allow_names_in_acls = false ``` To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with the reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false: ``` allow_overrides = false ``` app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form: ``` /endpoints/{account}/{container}/{object} /endpoints/{account}/{container} /endpoints/{account} /endpoints/v1/{account}/{container}/{object} /endpoints/v1/{account}/{container} /endpoints/v1/{account} ``` with a JSON-encoded list of endpoints of the form: ``` http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj} http://{server}:{port}/{dev}/{part}/{acc}/{cont} http://{server}:{port}/{dev}/{part}/{acc} ``` correspondingly, e.g.: ``` http://10.1.1.1:6200/sda1/2/a/c2/o1 http://10.1.1.1:6200/sda1/2/a/c2 http://10.1.1.1:6200/sda1/2/a ``` Using the v2 API, answers requests of the form: ``` /endpoints/v2/{account}/{container}/{object} /endpoints/v2/{account}/{container} /endpoints/v2/{account} ``` with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.: ``` { \"endpoints\": [\"http://10.1.1.1:6210/sda1/2/a/c3/o1\", \"http://10.1.1.1:6230/sda3/2/a/c3/o1\", \"http://10.1.1.1:6240/sda4/2/a/c3/o1\"], \"headers\": {\"X-Backend-Storage-Policy-Index\": \"1\"}} ``` In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
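As a sketch of how a client might consume the v2 API described above (the proxy address and the account/container/object names are illustrative):

```python
# Query locality data for an object, then prepare direct requests to the
# storage nodes; the proxy address below is an assumption.
import json
import urllib.request

with urllib.request.urlopen(
        'http://proxy.example.com:8080/endpoints/v2/a/c3/o1') as resp:
    info = json.load(resp)

for url in info['endpoints']:
    # Requests made directly to a storage node should carry the returned
    # headers, e.g. X-Backend-Storage-Policy-Index.
    req = urllib.request.Request(url, headers=info['headers'])
```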
The /endpoints/ path is customizable (list_endpoints_path configuration parameter).

Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware).

Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.

Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object

Bases: object Caching middleware that manages caching in swift.

Created on February 27, 2012

A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid.

@author: eamonn-otoole

Object versioning in Swift has 3 different modes. There are two legacy modes that have a similar API with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation.

In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object.

This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response.

After the introduction of this feature containers in a Swift cluster will be in one of three possible states: 1. Object Versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled.

Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
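For example, versioning could be enabled on an existing container like this (a sketch following the curl conventions used elsewhere in this document):

```
curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Versions-Enabled: true" http://<storage_url>/container
```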
The hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged.

To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted.

When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response.

Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id.

A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response.

A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes.

On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous versions' content will still be recoverable.

Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter:

```
?versions
```

To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter.

During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest.

To operate on previous versions, clients can use the query parameter:

```
?version-id=<id>
```

where the <id> is the value from the X-Object-Version-Id header.

Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response.

A HEAD/GET request to a delete-marker will result in a 404 Not Found response.

When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted and no changes made to the symlink pointing to the current version.
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline; you must also set the option allow_object_versioning to True.

Bases: ObjectVersioningContext

Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available.

Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted.

Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the objects' bytes and etag to use the target's instead of using the symlink info.

Bases: ObjectVersioningContext

Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object. req original request. versions_cont container where previous versions of the object are stored. account account name.

Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id.

Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on

Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:

```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```

These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split().

remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable.

status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request.

source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.)

log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information.

Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-).

Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html

The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by the leftmost middleware.

Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made.

All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source.

Bases: object Middleware that logs Swift proxy requests in the swift log format.

Log a request.
req swob.Request object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wire_status_int the on the wire status int

Bases: Exception

Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable.

Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined

Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path

paste.deploy app factory for creating WSGI proxy apps.

Returns number of requests allowed per second for given size.

Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration

Bases: object Middleware that makes an entire cluster or individual accounts read only.

Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only.

paste.deploy app factory for creating WSGI proxy apps.

Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon recon_cache_path = /var/cache/swift

get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version

Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation.

The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy.

All metadata of the source object is preserved during object copy. One can also provide additional metadata during the PUT/COPY request. This will overwrite any existing conflicting keys.
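For instance, new metadata can be supplied on the copy request itself, using the X-Copy-From form described below (a sketch; the metadata header name is illustrative):

```
curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'X-Object-Meta-Color: blue' -H 'Content-Length: 0'
```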
Server side copy can also be used to change the content-type of an existing object.

The destination container must exist before requesting copy of the object.

When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request.

The request to copy an object should have no body (i.e. the content-length of the request must be zero).

There are two ways in which an object can be copied:

Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:

```
curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0'
```

Send a COPY request with an existing object in the URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:

```
curl -i -X COPY http://<storage_url>/container2/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0'
```

Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object).

Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from the container in the source account and permission to write to the container in the destination account).

Similar to the examples mentioned above, there are two ways to copy objects across accounts:

Like the example above, send a PUT request to copy the object but with an additional header named X-Copy-From-Account specifying the source account. Example:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0'
```

Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of the destination account. Example:

```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0'
```

The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:

```
?multipart-manifest=get
```

If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB.

Bases: WSGIContext Please see the SLO docs (Static Large Objects) for further details.

This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request.

The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:

```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```

Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:

```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```

If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/.

If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found.

For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings.

If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well-defined document structure that can be styled.

Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix.

Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL.

By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting an X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead.

The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure.

Example usage of this middleware via swift:

Make the container publicly readable:

```
swift post -r '.r:*' container
```

You should be able to get objects directly, but no index.html resolution or listings.

Set an index file directive:

```
swift post -m 'web-index:index.html' container
```

You should be able to hit paths that have an index.html without needing to type the index.html part.

Turn on listings:

```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```

Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:

```
swift post -m 'web-listings-css:listings.css' container
```

Set an error file:

```
swift post -m 'web-error:error.html' container
```

Now 401s should load 401error.html, 404s should load 404error.html, etc.

Set Content-Type of directory marker object:

```
swift post -m 'web-directory-type:text/directory' container
```

Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects.

Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict. Only used in tests.

Returns a Static Web WSGI filter for use with paste.deploy.

Symlink Middleware

Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called the target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process.

Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with the X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time.

Clients may optionally include an X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink, the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error.

A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself.

A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max, an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used.

If a static symlink (i.e.
a symlink created with an X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without an X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with an X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header).

A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object.

A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency.

A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends an X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing.

A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted.

A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself.

An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings.

Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks.

If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object.

A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function.
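For illustration, a JSON listing entry for a symlink might look like the following (a sketch; all field values are placeholders, and only the symlink_path key is specific to this middleware):

```
{"name": "my-symlink", "bytes": 0,
 "hash": "d41d8cd98f00b204e9800998ecf8427e",
 "content_type": "application/symlink",
 "last_modified": "2023-01-01T00:00:00.000000",
 "symlink_path": "/v1/AUTH_account/target-container/target-object"}
```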
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info.

If an X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes.

If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata.

Errors

PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error.

POST with the header X-Symlink-Target will produce a 400 BadRequest error.

GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error.

PUT/GET/HEAD on a symlink that includes an X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error.

POSTs will produce a 307 Temporary Redirect error.

Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used.

Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object.

If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required:

Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf.

Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server.

Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster.

Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called.

Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator

Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator

Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called

Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator

Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator

Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly.

Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly.

Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information.

All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```

There are three special groups: .reseller_admin can do anything to any account for this auth; .reseller_reader can GET/HEAD anything in any account for this auth; .admin can do anything within the account. If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin.

The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix>_<account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override.

The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible.

We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account.

The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix.

If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed.

The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name.

Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs.

If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users.

Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata.

Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata.

Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs).

To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl
acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = {"admin": ["alice"], "read-write": ["bob", "carol"]}
headers = {"X-Account-Access-Control":
           format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v)
                       for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s "
      "$storage_url" % header_str)
'
```

Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file

Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl()

Returns None if the request is authorized to continue or a standard WSGI response callable if not.

Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Return a user-readable string indicating the errors in the input ACL, or None if there are no errors.

Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user.

WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable

Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above.

Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object

Returns a WSGI filter app for use with paste.deploy.

TempURL Middleware

Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc., the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs which contain signatures that are valid for all objects sharing a common prefix. These prefix-based URLs are useful for sharing a set of objects.

Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from.

To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account.

The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported.

For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward.

Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the ip to a range of 1.2.3.X, so starting from the ip 1.2.3.0 to 1.2.3.255:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed.

Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift.

TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present.

With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object, but you may still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings:

incoming_remove_headers A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired

incoming_allow_headers A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None

outgoing_remove_headers A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-*

outgoing_allow_headers A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-*

methods A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE

allowed_digests A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512

Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals.

Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals.

Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict.

Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY.

Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*.

Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE.

Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*.

Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay.

Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*.

Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key.

Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*.

Returns the WSGI filter for use with paste.deploy.

Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers.

Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled:

X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container.

X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found.

Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response.

Note It is recommended to use a different archive container for each container that is being versioned.

Note Enabling versioning on an archive container is not recommended.

When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created.

A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups.

A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
In other words, new versions are only created when the content of the object changes.

A DELETE to a versioned object will be handled in one of two ways, as described above.

To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it.

This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline, and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature.

In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request.

Note

You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location.

If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that they can now be tracked by the versioned_writes middleware.

Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss.

First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by X-Versions-Location exists.
In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-Versions-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from versions container and back in container container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location" }, { "data": "In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-History-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> -H \"Destination: container/myobject\" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allowversionedwrites to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H \"X-Auth-Token: <token>\" -H \"X-Remove-Versions-Location: x\" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name. 
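As a concrete illustration of the archive naming scheme referenced earlier (<length><object_name>/<timestamp>), here is a minimal sketch; archive_name is a hypothetical helper for illustration, not part of the middleware API:

```python
from swift.common.utils import Timestamp

def archive_name(object_name, timestamp):
    # <length> is the 3-character zero-padded hex length of the object
    # name, so 'myobject' (8 characters) yields the '008myobject/...'
    # prefix used in the listing queries above.
    return '%03x%s/%s' % (len(object_name), object_name, timestamp)

print(archive_name('myobject', Timestamp(1512508563).internal))
# -> 008myobject/1512508563.00000
```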
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on eventlet aware profiler.(For the future, more profilers could be added in to collect more data for analysis.) Profiling all incoming requests and accumulating cpu timing statistics information for performance tuning and optimization. An mini web UI is also provided for profiling data analysis. It can be accessed from the URL as below. Index page for browse profile data: ``` http://SERVERIP:PORT/profile_ ``` List all profiles to return profile ids in json format: ``` http://SERVERIP:PORT/profile_/ http://SERVERIP:PORT/profile_/all ``` Retrieve specific profile data in different formats: ``` http://SERVERIP:PORT/profile/PROFILEID?format=[default|json|csv|ods] http://SERVERIP:PORT/profile_/current?format=[default|json|csv|ods] http://SERVERIP:PORT/profile_/all?format=[default|json|csv|ods] ``` Retrieve metrics from specific function in json format: ``` http://SERVERIP:PORT/profile/PROFILEID/NFL?format=json http://SERVERIP:PORT/profile_/current/NFL?format=json http://SERVERIP:PORT/profile_/all/NFL?format=json NFL is defined by concatenation of file name, function name and the first line number. e.g.:: account.py:50(GETorHEAD) or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD) A list of URL examples: http://localhost:8080/profile (proxy server) http://localhost:6200/profile/all (object server) http://localhost:6201/profile/current (container server) http://localhost:6202/profile/12345?format=json (account server) ``` The profiling middleware can be configured in paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in etc directory. The profiling data is provided with four formats such as binary(by default), json, csv and odf spreadsheet which requires installing odfpy library: ``` sudo pip install odfpy ``` Theres also a simple visualization capability which is enabled by using matplotlib toolkit. it is also required to be installed if" } ]
{ "category": "Runtime", "file_name": "middleware.html#discoverability.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Header-Name: {arbitrary:json,encoded:string} JSON will be forced ASCII (containing six-char uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys dont cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you" }, { "data": "acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parseaclv1() or parseaclv2()) the return value of parseaclv1() or parseaclv2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {groups: [], referrers: []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, Im looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in init. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader(Accept, text/html) Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skipacceptencoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed. 
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be" }, { "data": "req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us for accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFTCONFFILE and reset module level global constraint attrs, populating OVERRIDECONSTRAINTS AND EFFECTIVECONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the files mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the users container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digestalgorithm, hexencoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. 
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated itll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULTALLOWEDDIGESTS. logger optional logger; if provided, use it issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared" }, { "data": "digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/on127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g, {states:updating} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. 
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
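A short usage sketch for the CompressingFileReader described above, compressing an in-memory file; the input bytes are illustrative:

```python
from io import BytesIO
from swift.common.internal_client import CompressingFileReader

# Wrap a file-like object; iterating yields gzip-compressed chunks
# of up to chunk_size bytes each.
reader = CompressingFileReader(BytesIO(b'example object data'),
                               compresslevel=9, chunk_size=4096)
compressed = b''.join(reader)

# Reset to the state needed for a first read before re-using the
# reader, e.g. when passing it to InternalClient.upload_object().
reader.seek(0)
```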
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
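Putting the InternalClient methods documented above together, a minimal sketch; the conf path, user agent, account and container names are illustrative:

```python
from swift.common.internal_client import InternalClient

client = InternalClient('/etc/swift/internal-client.conf',
                        'example-daemon', request_tries=3)

if client.container_exists('AUTH_test', 'images'):
    # iter_objects yields listing dicts with keys such as
    # 'name' and 'bytes'.
    for obj in client.iter_objects('AUTH_test', 'images',
                                   prefix='2024/'):
        print(obj['name'], obj['bytes'])

    # Keys in the returned dict have the prefix stripped.
    meta = client.get_container_metadata(
        'AUTH_test', 'images', metadata_prefix='x-container-meta-')
```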
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group : param pid: process id : param sig: signal to send Send signal to process and check process name : param pid: process id : param sig: signal to send : param name: name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the servers binaries are installed or" }, { "data": "server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that arent responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesnt use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but its hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldnt work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract to the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache. 
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include tempurlsignature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to getswiftinfo. in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large" }, { "data": "req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. maxgettime maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) responsebodylength optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .appiterrange(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnect. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segments etag doesnt match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from fromr to tor. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key. 
Returns the full X-Object-Sysmeta-Container-Update-Override-* header key. key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to split_and_validate_path(). a list, result of split_and_validate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling of UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in the query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably won't work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response.
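The metadata prefix helpers above are simple string builders; a short sketch (assuming they are imported from swift.common.request_helpers, the module documented here) shows the namespaces they produce:

```
# Sketch only: expected outputs shown as comments.
from swift.common.request_helpers import (
    get_sys_meta_prefix, get_user_meta_prefix,
    get_object_transient_sysmeta)

get_sys_meta_prefix('object')      # 'x-object-sysmeta-'
get_user_meta_prefix('container')  # 'x-container-meta-'
get_object_transient_sysmeta('my-key')
# 'x-object-transient-sysmeta-my-key'
```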
Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allow_open_expired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if the replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the request's X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middleware's alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove the Range header from the request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everything's okay, as native strings HTTPBadRequest if something's not okay Separate a valid reserved name into the component parts. a list of strings
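The left-most-wins resolution described above can be sketched as follows. This assumes resolve_etag_is_at_header is imported from swift.common.request_helpers and that metadata key lookup is case-insensitive; the header values follow the encryption/EC example in the text.

```
# Sketch only, per the encrypter/EC example above.
from swift.common.request_helpers import resolve_etag_is_at_header
from swift.common.swob import Request

req = Request.blank('/v1/a/c/o')
req.headers['X-Backend-Etag-Is-At'] = (
    'X-Object-Sysmeta-Crypto-Etag, X-Object-Sysmeta-Ec-Etag')

# Object is not encrypted: only the EC alternate etag is in metadata,
# so the resolver falls through to it.
metadata = {'X-Object-Sysmeta-Ec-Etag': 'ec-etag-value'}
assert resolve_etag_is_at_header(req, metadata) == 'ec-etag-value'
```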
Removes the object transient system metadata prefix from the start of a header key. key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where an alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object names. HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasn't stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but it's also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Request's Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers['Content-Range'] sets and gets the value of headers.environ['HTTP_CONTENT_RANGE'] Bases: object Wraps a Request's If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Request's Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges.
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered syntactically invalid, and thus a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid and ValueError will be thrown so that the header will be ignored. value does not start with bytes= range value start is greater than the end, e.g. bytes=5-3 range does not have start or end, e.g. bytes=- range does not have a hyphen, e.g. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by the underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method __init__ made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored, which will eventually make the response be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, and 416 will be returned; if a returned list has at least one element, the list indicates that there is at least one range valid and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as an Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with a property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this request's environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but is not a valid Timestamp the request's X-Timestamp header, as a Timestamp
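A brief sketch of the Range behavior documented above; the outputs shown as comments are what the documented semantics imply, not canonical values.

```
# Sketch only: exercises the documented Range parsing rules.
from swift.common.swob import Range

rng = Range('bytes=0-99,1000-')
# ranges_for_length() resolves each spec against the content length;
# note the documented "+1 on the end" making the pairs half-open.
rng.ranges_for_length(1200)   # -> [(0, 100), (1000, 1200)]

# A syntactically invalid spec (start > end) raises ValueError, so
# callers can ignore the header as the RFC requires:
try:
    Range('bytes=5-3')
except ValueError:
    pass
```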
Calls the application with this request's environment. Returns a Response object that wraps up the application's result. application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of the last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location.
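The Request.blank constructor and split_path combine naturally; a small sketch (the path and header values are illustrative):

```
# Sketch only: build a request in memory and split its path.
from swift.common.swob import Request

req = Request.blank('/v1/AUTH_test/cont/obj/with/slashes',
                    headers={'X-Timestamp': '1234567890.12345'})
version, account, container, obj = req.split_path(4, 4, True)
# version == 'v1', account == 'AUTH_test', container == 'cont',
# obj == 'obj/with/slashes' (rest_with_last keeps trailing data)
```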
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of an If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need to support conditional requests. It's most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or app_iter to reset the content_length properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for the WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators' close methods, if any. Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of _get_next_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gc'ed
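The wsgify decorator described earlier in this swob section turns a Request-to-Response function into a WSGI callable; a minimal sketch, with a made-up /ping route:

```
# Sketch only: a toy app; the route and body are illustrative.
from swift.common.swob import HTTPNotFound, Request, Response, wsgify

@wsgify
def simple_app(req):
    # req is a swob.Request; returning a Response (or raising an
    # HTTPException such as HTTPNotFound) produces the WSGI response.
    if req.path == '/ping':
        return Response(body=b'pong', content_type='text/plain')
    raise HTTPNotFound(request=req)

# Exercise it without a server, via Request.call_application():
status, headers, app_iter = Request.blank('/ping').call_application(
    simple_app)
```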
Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over the wrapped iterator, providing an opportunity to switch greenthreads. This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve. The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for the first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used members. Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <account_name>/<container_name>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces.
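A sketch of the GreenAsyncPile pattern described above; results arrive as they complete, so order is not guaranteed (the worker function is a placeholder):

```
# Sketch only: fan out work and consume results as they arrive.
from swift.common.utils import GreenAsyncPile

def fetch(node_id):
    return node_id * 2   # placeholder for real backend work

pile = GreenAsyncPile(10)        # up to 10 concurrent green threads
for node_id in range(5):
    pile.spawn(fetch, node_id)

results = sorted(pile)           # e.g. [0, 2, 4, 6, 8]; the pile
                                 # itself yields in completion order
```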
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/ <root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp.
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard range's ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard range's parent shard range, the parent's parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard range's root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard range's root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class __init__() method and requires all of the __init__() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if object_count or bytes_used cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard ranges. This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard.
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name> Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
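A hedged sketch of constructing a ShardRange and using its Namespace semantics; the container path and bounds are made up, and the membership test follows the documented rule that the lower bound is excluded while the upper bound is included.

```
# Sketch only: name and bounds are illustrative.
from swift.common.utils import ShardRange, Timestamp

sr = ShardRange('.shards_AUTH_test/c-xyz-0', Timestamp.now(),
                lower='cat', upper='giraffe', object_count=10)

# Per the Namespace rules: lower bound excluded, upper bound included.
assert 'dog' in sr
assert 'cat' not in sr
assert 'giraffe' in sr
```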
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list's contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ``` >>> si = Spliterator(["abcde", "fg", "hijkl"]) >>> ''.join(si.take(4)) "abcd" >>> ''.join(si.take(3)) "efg" >>> ''.join(si.take(1)) "h" >>> ''.join(si.take(3)) "ijk" >>> ''.join(si.take(3)) "l" # shorter than requested; this can happen with the last iterator ``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts.Timeout, it reduces the number of context switches in eventlet by avoiding scheduling actions (throwing an Exception) and then unscheduling them if the timeouts are cancelled. For example, when a 10 second timeout is scheduled first, the watchdog greenlet sleeps 10 seconds; a later timeout that would expire sooner wakes the watchdog greenlet to calculate a new sleep period, and the greenlet wakes again for the 1st timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. eg: they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. eg: saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize number of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now if only we could also have a progress bar... start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s)
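A sketch of backward() from above, tailing a large log file; the path is illustrative, and the file is opened in binary mode on the assumption that lines are yielded as bytes.

```
# Sketch only: read the most recent 10 lines of a big file.
from swift.common.utils import backward

with open('/var/log/swift/storage.log', 'rb') as f:  # illustrative path
    recent = []
    for line in backward(f):     # yields lines last-first
        recent.append(line)
        if len(recent) >= 10:
            break
```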
Appends an item to a comma-separated string. If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header Finally, there is one optional key that is used in the multipart/byteranges case: entity_length: length of the requested entity (not necessarily equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. "boundary", not "--boundary"). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. Parse a content-type and return a tuple containing: the content_type string minus any swift_bytes param, and the swift_bytes value or None if the param was not found. content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found.
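The content-type/swift_bytes split above is easiest to show directly; the expected results in the comments follow the documented behavior.

```
# Sketch only.
from swift.common.utils import extract_swift_bytes

extract_swift_bytes('text/plain; swift_bytes=100')
# -> ('text/plain', '100')
extract_swift_bytes('text/plain')
# -> ('text/plain', None): swift_bytes is None when the param is absent
```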
Close a swob or WSGI response and maybe drain it. It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form: ``` <hash>[_<epoch>].db ``` where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the python select module to test if poll() is available on the platform. Return partition number for given hex hash and partition power. hex_hash A hash string part_power partition power partition number
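A sketch of the two fs_has_free_space modes documented above; the mount point is illustrative.

```
# Sketch only: bytes mode vs. percentage mode.
from swift.common.utils import fs_has_free_space

# At least 10 MiB free on the filesystem holding this path:
fs_has_free_space('/srv/node/sda1', 10 * 1024 * 1024, False)
# At least 5% of the filesystem's capacity free:
fs_has_free_space('/srv/node/sda1', 5, True)
```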
devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but a X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ``` 'lock', None => 'lock' 'lock', 0 => 'lock' 'lock', 1 => 'lock-1' 'object', 2 => 'object-2' ``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to the file descriptor at the target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file.
This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in Write mode. Python doesn't allow directories to be opened in Write Mode. So we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lock_path). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string that distinguishes different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if the target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set, get, etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. "divider", not "--divider") read_chunk_size size of strings read via input_file.read()
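A sketch of the lock_path context manager documented above; the directory is illustrative, and LockTimeout (the default timeout_class) is raised if the lock cannot be acquired in time.

```
# Sketch only: serialize work on a directory.
from swift.common.utils import lock_path

with lock_path('/srv/node/sda1/tmp', timeout=10):
    # Exclusive section: other callers using lock_path on the same
    # directory block here for up to their own timeout.
    pass
```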
Ensures the path is a directory, or makes it if not. Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. "divider", not "--divider") read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ("form-data", {"name": "somefile", "filename": "test.html"}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')]) ``` content_type content_type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >>> parse_db_filename("ab2134.db") ('ab2134', None, '.db') >>> parse_db_filename("ab2134_1234567890.12345.db") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) per second, as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. iterable an iterable object Wrapper for os.rmdir; ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink; OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
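The running_time contract of the rate-limiting helper documented above (ratelimit_sleep() in swift.common.utils) is easier to see in code. The following is a minimal, illustrative sketch of the documented behavior, not the actual Swift implementation; time.sleep() stands in for eventlet.sleep(), and the clock_accuracy constant is an assumption made for demonstration:
```
import time

def ratelimit_sleep_sketch(running_time, max_rate, incr_by=1, rate_buffer=5):
    # running_time and the return value are in milliseconds, per the docs.
    if max_rate <= 0 or incr_by <= 0:
        return running_time
    clock_accuracy = 1000.0
    now = time.time() * clock_accuracy
    time_per_request = clock_accuracy * (float(incr_by) / max_rate)
    if now - running_time > rate_buffer * clock_accuracy:
        # The counter fell too far behind; jump forward rather than
        # allowing an unbounded burst to "catch up".
        running_time = now
    elif running_time - now > time_per_request:
        # We are ahead of schedule; sleep until the next allowable slot.
        time.sleep((running_time - now) / clock_accuracy)
    return running_time + time_per_request
```
A caller loops and carries the returned value forward, e.g. running_time = ratelimit_sleep_sketch(running_time, max_rate), exactly as the "Returns running_time that should be used for subsequent calls" note above describes.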
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different types of servers: If option replication_server is None then this decorator doesn't matter. If option replication_server is True then ONLY the methods with this decorator will be started. If option replication_server is False then the methods with this decorator will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are exhausted. its list of iterators Transform an ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate devices variables inside an rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dir_ext if present directories that end with dir_ext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that item's value will determine if the replication ip address and port are returned. If neither use_replication nor node_dict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error.
value some base64-encoded data allowlinebreaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swifts wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and wont lead to directory traversal when" }, { "data": "device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowedsynchosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.containersyncrealms.ContainerSyncRealms to validate against. A tuple of (errorstring, validatedendpoint, realm, realmkey). The errorstring will None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensure it is synced to disk, then perform a move to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) well intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploys ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entrypointname entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters. 
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdapter object. servers_per_port (int) The number of workers to run per port. Yields all known listen sockets. Log a server's exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdapter object. Yields all known listen sockets. Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited.
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies certfile Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or None to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env['PATH_INFO']. url += quote(environ['PATH_INFO']) query_string The new query string or None to use the original. When building a url, a Webob Request will append the query string directly to the url. url += '?' + env['QUERY_STRING'] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_subrequest() calls this make_env() to help build the swob.Request.
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a serversperport setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allowmodifypipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whos first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclasses (e.g. StoragePolicy, ECStoragePolicy, etc) or use reloadstoragepolicies() to load POLICIES from swift.conf. The objectring property is lazy loaded once the services swiftdir is known via getobjectring(), but it may be over-ridden via object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage" }, { "data": "Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reloadstoragepolicies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. 
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index. if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced:

If a policy with index 0 is not declared and no other policies defined, Swift will create one
The policy index must be a non-negative integer
If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default
Policy indexes must be unique
Policy names are required
Policy names are case insensitive
Policy names must contain only letters, digits or a dash
Policy names must be unique
The policy name Policy-0 can only be used for the policy with index 0
If any policies are defined, exactly one policy must be declared default
Deprecated policies can not be declared the default

Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int); if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
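As a rough illustration of the encode/decode convention described above: the real helpers are get_policy_string() and split_policy_string() in swift.common.storage_policy, which work with StoragePolicy instances and raise PolicyError on bad input; the sketch below works with bare integer indexes and assumes the commonly seen base-N suffix form, with the legacy Policy-0 getting no suffix:
```
def policy_string_sketch(base, policy_index):
    # Policy 0 keeps the legacy, suffix-free name (e.g. 'objects');
    # other policies get the index appended (e.g. 'objects-2').
    if not policy_index:  # None or 0 -> legacy Policy-0
        return base
    return '%s-%d' % (base, int(policy_index))

def split_policy_string_sketch(policy_string):
    # Inverse of the above; returns (base, policy_index).
    base, sep, index = policy_string.rpartition('-')
    if sep and index.isdigit():
        return base, int(index)
    return policy_string, 0

assert policy_string_sketch('objects', 0) == 'objects'
assert policy_string_sketch('objects', 2) == 'objects-2'
assert split_policy_string_sketch('objects-2') == ('objects', 2)
```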
{ "category": "Runtime", "file_name": "middleware.html#dynamic-large-objects.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Header-Name: {arbitrary:json,encoded:string} JSON will be forced ASCII (containing six-char uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys dont cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you" }, { "data": "acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parseaclv1() or parseaclv2()) the return value of parseaclv1() or parseaclv2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {groups: [], referrers: []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, Im looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in init. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader(Accept, text/html) Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skipacceptencoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed. 
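Returning to the referrer ACL rules documented earlier in this section: matching is order-dependent, with the last matching rule winning. A hedged sketch of that behavior follows; the real implementation is referrer_allowed() in swift.common.middleware.acl, which takes the full Referer header value, whereas this sketch assumes the bare host has already been extracted:
```
def referrer_allowed_sketch(referrer_host, referrer_acl):
    # referrer_acl entries are the designations from parse_acl(),
    # with the leading '.r:' already stripped, e.g.
    # ['.example.com', '-thief.example.com'].
    allow = False
    for rule in referrer_acl or []:
        if rule.startswith('-'):
            rule, is_allow = rule[1:], False
        else:
            is_allow = True
        if (rule == '*' or rule == referrer_host or
                (rule.startswith('.') and referrer_host.endswith(rule))):
            allow = is_allow  # later rules override earlier ones
    return allow
```
With the documented example ACL, referrer_allowed_sketch('www.example.com', ['.example.com', '-thief.example.com']) returns True while referrer_allowed_sketch('thief.example.com', ...) returns False, matching the deny-after-allow ordering described above.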
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be" }, { "data": "req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us for accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFTCONFFILE and reset module level global constraint attrs, populating OVERRIDECONSTRAINTS AND EFFECTIVECONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the files mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the users container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digestalgorithm, hexencoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. 
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated itll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULTALLOWEDDIGESTS. logger optional logger; if provided, use it issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared" }, { "data": "digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/on127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g, {states:updating} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. 
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly to the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in don't over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of times. func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyword arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating over the object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file object's read(). Compressed chunk from file object.
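A hedged usage sketch of the CompressingFileReader wrapper documented above. The read-until-empty loop is an assumption based on the parameter descriptions (read() delegating to the wrapped file's read()), not verified behavior:
```
from io import BytesIO
from swift.common.internal_client import CompressingFileReader

# Wrap an uncompressed source so a consumer (e.g. upload_object)
# receives gzip-compressed bytes.
source = BytesIO(b'some object data' * 1024)
reader = CompressingFileReader(source, compresslevel=9, chunk_size=4096)

# Drain compressed chunks until the reader reports EOF; a single read()
# may not include the compressor's final flush, so we loop.
compressed = b''.join(iter(reader.read, b''))
```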
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to set the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to ''. acceptable_statuses List of statuses for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The container's account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to ''. acceptable_statuses List of statuses for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an object's metadata. The object's metadata will be overwritten by the values in the metadata dict. account The object's account. container The object's container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to ''. acceptable_statuses List of statuses for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Uploads an object. fobj File object to read the object's content from. account The object's account. container The object's container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response.
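To make the API above concrete, here is a minimal sketch of driving InternalClient; the config path, account and container names are hypothetical, and the conf file is an ordinary proxy-style config:

```
from swift.common.internal_client import InternalClient, UnexpectedResponse

# hypothetical config path and names
client = InternalClient('/etc/swift/internal-client.conf',
                        user_agent='my-daemon', request_tries=3)
try:
    if not client.container_exists('AUTH_test', 'backups'):
        client.create_container('AUTH_test', 'backups')
    for obj in client.iter_objects('AUTH_test', 'backups'):
        print(obj['name'])
except UnexpectedResponse as err:
    # raised when a request fails to get an acceptable status
    print('request failed: %s' % err)
```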
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of servers. servers: list of server names as strings. alias for reload Find and return the decorated method named like cmd. cmd the command to get, a string; if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands. a list of string tuples (cmd, help), the method names which are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it. cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type. server name of server Get conf files for this server. number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile. pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile. conffile a conffile for this server, a string the pidfile for this conffile Get running pids. a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them. graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids. graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server. number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server. sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server. sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server. pids if not supplied, pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server. a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands; commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as a swift compatible server name. E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID.
pid process id Send signal to process group. pid process id sig signal to send Send signal to process and check process name. pid process id sig signal to send name name to ensure target process Try to increase resource limits of the OS. Move PYTHON_EGG_CACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the server's binaries are installed or not. server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids, yielding back those pids that aren't responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesn't use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but it's hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldn't work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parse_socket_string() for details. Generate a new pool item. In order for the pool to function, either this method must be overridden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract from the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError
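A short sketch of the MemcacheRing API described in this section; the server addresses are hypothetical, and values are JSON-serialized by default:

```
from swift.common.memcached import MemcacheRing

# hypothetical server list; keys are consistently hashed across the ring
memcache = MemcacheRing(['127.0.0.1:11211', '127.0.0.2:11211'])

memcache.set('mykey', {'some': 'value'}, time=300)  # JSON-serialized
print(memcache.get('mykey'))                        # {'some': 'value'}
memcache.incr('mycounter', delta=5, time=300)
memcache.delete('mykey')
```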
Deletes a key/value pair from memcache. key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also deserialize the object before returning if it is serialized in memcache with JSON. key key raise_on_error if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by delta. If the key can't be found, it's added as delta or 0 if delta < 0. If passed a negative number, will use memcached's decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live min_compress_len minimum compress length; this parameter was added to keep the signature compatible with the python-memcached interface. This implementation ignores it. raise_on_error if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live min_compress_len minimum compress length; this parameter was added to keep the signature compatible with the python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the register_swift_info call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the reveal_sensitive_prefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the reveal_sensitive_prefix option in the proxy-server sample config for more information. query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include temp_url_signature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters.
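A sketch of how middleware might use the registration functions just described, typically once at init time; the parameter name shown is illustrative:

```
from swift.common.registry import (
    register_sensitive_header, register_sensitive_param)

# values of these will be redacted by proxy_logging
register_sensitive_header('X-Auth-Token')
register_sensitive_param('temp_url_signature')  # illustrative param name
```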
Registers information about the swift cluster to be retrieved with calls to get_swift_info. Note: do not use "." in name or in any of the keys in kwargs; "." is used in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has "." in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large object. req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. max_get_time maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) response_body_length optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .app_iter_range(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnects. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segment's etag doesn't match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from from_r to to_r. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied.
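For illustration, a hedged sketch of copy_header_subset; from_resp and to_resp stand in for any pair of swob request/response objects:

```
from swift.common.request_helpers import copy_header_subset

# from_resp and to_resp are hypothetical swob Request/Response objects;
# copy only object user metadata headers between them
copy_header_subset(from_resp, to_resp,
                   lambda k: k.lower().startswith('x-object-meta-'))
```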
Returns the full X-Object-Sysmeta-Container-Update-Override-* header key. key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to split_and_validate_path(). a list, result of split_and_validate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata, i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling of UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably won't work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response.
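A quick sketch of the metadata-prefix helpers described earlier in this section; the returned strings shown in comments assume the conventional lowercase x-<type>-... prefixes:

```
from swift.common.request_helpers import (
    get_object_transient_sysmeta, get_sys_meta_prefix, get_user_meta_prefix)

get_sys_meta_prefix('object')           # 'x-object-sysmeta-'
get_user_meta_prefix('container')       # 'x-container-meta-'
get_object_transient_sysmeta('crypto')  # 'x-object-transient-sysmeta-crypto'
```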
Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allow_open_expired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the request's X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middleware's alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everything's okay, as native strings HTTPBadRequest if something's not okay Separate a valid reserved name into the component parts. a list of strings
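A hedged sketch of round-tripping a reserved name with the helpers above; the component values are made up:

```
from swift.common.request_helpers import (
    get_reserved_name, split_reserved_name)

# made-up component values
name = get_reserved_name('versions', 'obj', '0000001234.00000')
split_reserved_name(name)  # ['versions', 'obj', '0000001234.00000']
```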
Removes the object transient system metadata prefix from the start of a header key. key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object names. HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasn't stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but it's also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Request's Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers['Content-Range'] sets and gets the value of headers.environ['HTTP_CONTENT_RANGE'] Bases: object Wraps a Request's If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Request's Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges.
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered syntactically invalid, thus a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid and ValueError will be thrown so that the header will be ignored. value does not start with bytes= range value start is greater than the end, e.g. bytes=5-3 range does not have start or end, e.g. bytes=- range does not have a hyphen, e.g. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method __init__ made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored, which will eventually make the response a 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header; 416 will be returned. If a returned list has at least one element, the list indicates that there is at least one valid range and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as an Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with a property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this request's environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the request's X-Timestamp header, as a Timestamp Calls the application with this request's environment. Returns a Response object that wraps up the application's result.
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment
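Putting the Request API together, a minimal sketch: a trivial app wrapped with the wsgify decorator (described later in this section) handles a request built with Request.blank:

```
from swift.common import swob

@swob.wsgify
def hello_app(req):
    # req is a swob.Request; returning a Response is enough
    return swob.Response(body=b'hello', content_type='text/plain')

req = swob.Request.blank('/v1/AUTH_test/c/o', method='GET')
version, account, container, obj = req.split_path(4, 4, True)
resp = req.get_response(hello_app)
print(resp.status_int, resp.body)  # 200 b'hello'
```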
Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location. Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of an If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need to support conditional requests. It's most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or app_iter to reset the content_length properties on the request. It is ok to not call this method; the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators' close methods, if any. Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of _get_next_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gc'ed Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads.
This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve. The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used members. Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <account_name>/<container_name>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces.
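A small sketch of the Namespace bounds semantics just described (lower excluded, upper included); the name is arbitrary, and membership testing is assumed to follow those bounds:

```
from swift.common.utils import Namespace

ns = Namespace('AUTH_test/c', lower='d', upper='m')
'g' in ns  # True: within ('d', 'm']
'd' in ns  # False: the lower bound is not included
'm' in ns  # True: the upper bound is included
```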
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp.
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard range's ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard range's parent shard range, the parent's parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard range's root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard range's root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class __init__() method and requires all of the __init__() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if object_count or bytes_used cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard ranges. This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard.
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name> Given a value that may be either the name or the number of a state, return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
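A hedged sketch of constructing and updating a ShardRange; the shard container name here is illustrative only:

```
from swift.common.utils import ShardRange, Timestamp

sr = ShardRange('.shards_AUTH_test/c-shard-0', Timestamp.now(),
                lower='d', upper='m', state=ShardRange.ACTIVE)
sr.update_meta(100, 2048)  # object_count=100, bytes_used=2048
'g' in sr                  # True: within the ('d', 'm'] namespace
sr.row_count               # objects plus tombstones, per the docs above
```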
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents has been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ``` >>> si = Spliterator(["abcde", "fg", "hijkl"]) >>> ''.join(si.take(4)) "abcd" >>> ''.join(si.take(3)) "efg" >>> ''.join(si.take(1)) "h" >>> ''.join(si.take(3)) "ijk" >>> ''.join(si.take(3)) "l" # shorter than requested; this can happen with the last iterator ``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts.Timeout, it reduces the amount of context switching in eventlet by avoiding scheduling actions (throwing an Exception), then unscheduling them if the timeouts are cancelled. For example: a request for a 10 second timeout puts the watchdog greenlet to sleep for 10 seconds; a subsequent request for a shorter timeout wakes it up to calculate a new sleep period; it then wakes up again for the first timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
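A sketch of scheduling a timeout via the context manager described next; the method that starts the watchdog greenthread is assumed here to be spawn(), matching the "Start the watchdog greenthread" description above:

```
from eventlet import Timeout
from swift.common.utils import Watchdog, WatchdogTimeout

watchdog = Watchdog()
watchdog.spawn()  # assumed name for starting the watchdog greenthread

try:
    with WatchdogTimeout(watchdog, 10.0, Timeout):
        read_from_backend()  # hypothetical blocking I/O
except Timeout:
    pass  # the scheduled timeout fired after 10 seconds
```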
Schedule a timeout action. timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout. key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory. (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated, e.g. they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing, e.g. saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize no of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware). env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now if only we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string.
Appends an item to a comma-separated string. If the comma-separated string is empty/None, just returns item.

Distribute items as evenly as possible into N buckets.

Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header Further, there is an optional key that is used in the multipart/byteranges case: entity_length: length of the requested entity (generally equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. "boundary", not "--boundary"). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger

Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions.

Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error.

Sets the userid/groupid of the current process, gets the session leader, etc. user User name to change privileges to

Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file

Install the appropriate Eventlet monkey patches.

the content_type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None)

Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes)

Sync modified file data to disk. fd file descriptor

Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace.

Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found.
Close a swob or WSGI response and maybe drain it. It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an unused response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body.

Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist

Sync modified file data and metadata to disk. fd file descriptor

Sync directory entries to disk. dirpath Path to the directory to be synced.

Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form:

```
<hash>[_<epoch>].db
```

where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty.

Returns an expiring object container name for given X-Delete-At and (native string) a/c/o.

Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function from patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the Python select module to test if poll() is available on the platform.

Return partition number for given hex hash and partition power. hex_hash A hash string part_power partition power partition number
devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir the (integer) partition from the path

Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but an X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header

Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s)

This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.:

```
'lock', None => 'lock'
'lock', 0 => 'lock'
'lock', 1 => 'lock-1'
'object', 2 => 'object-2'
```

base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen.

Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string

Returns the number in a human readable format; for example 1048576 = 1Mi.

Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file.

Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead.

Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call.

Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment

Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed

Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories.

Splits the str given and returns a properly stripped list of the comma separated values.

Load a recon cache file. Treats missing file as empty.
Context manager that acquires a lock on a file. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end

Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT

Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in write mode. Python doesn't allow directories to be opened in write mode, so we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string that distinguishes different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. (A usage sketch of these locking helpers follows below.)

Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp.

Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if the target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212

Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents

Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded

Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set, get, etc.

Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. "divider", not "--divider") read_chunk_size size of strings read via input_file.read()
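Here is a minimal sketch of the file-locking context manager described above; the path and timeout values are illustrative assumptions:

```python
from swift.common.utils import lock_file

# Hold an exclusive lock on the file while appending to it; the open
# file object is yielded by the context manager and the lock is
# released on exit. unlink=False keeps the file around afterwards.
with lock_file('/tmp/example.lock', timeout=10, append=True,
               unlink=False) as f:
    f.write('locked update\n')
```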
Ensures the path is a directory or makes it if not. Errors if the path exists but is a file or on permissions failure. path path to create

Apply all swift monkey patching consistently in one place.

Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. "divider", not "--divider") read_chunk_size size of strings read via input_file.read()

Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device>

Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set.

Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed)

Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ('form-data', {'name': 'somefile', 'filename': 'test.html'}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above).

Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed

Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples:

```
'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')])
'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')])
```

content_type content-type to parse a tuple containing (content type, list of k, v parameter tuples)

Splits a db filename into three parts: the hash, the epoch, and the extension.

```
>>> parse_db_filename("ab2134.db")
('ab2134', None, '.db')
>>> parse_db_filename("ab2134_1234567890.12345.db")
('ab2134', '1234567890.12345', '.db')
```

filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file.

Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers

Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist

Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies.

Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private

Decorator to declare which methods are publicly accessible as HTTP requests func function to make public

De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate

Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update

quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful.

Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. (A usage sketch follows below.)

Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. iterable an iterable object

Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir

Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink

Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
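To make the rate-limiting helper described above concrete, here is a minimal sketch; it assumes the helper is exposed as ratelimit_sleep in swift.common.utils, the name used by many Swift releases:

```python
from swift.common.utils import ratelimit_sleep  # assumed name

running_time = 0  # best to start at zero, per the docs
for chunk in range(100):
    # Sleeps just enough that we never exceed ~10 iterations/second;
    # the returned bookkeeping value is fed back into the next call.
    running_time = ratelimit_sleep(running_time, max_rate=10)
```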
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power

Decorator to declare which methods are accessible for different types of servers: If the option replication_server is None then this decorator doesn't matter. If the option replication_server is True then ONLY methods decorated with this decorator will be started. If the option replication_server is False then methods decorated with this decorator will NOT be started. func function to mark accessible for replication

Takes a list of iterators, yields an element from each in a round-robin fashion until all of them are exhausted. its list of iterators

Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address

Interpolate a device's variables inside an rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes

Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dirext if present directories that end with dirext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted

Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that item's value will determine if the replication ip address and port are returned. If neither use_replication nor node_dict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port)

Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from

Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory

Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks.

Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error.
value some base64-encoded data allow_line_breaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding

Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include:

```
READY=1
RELOADING=1
STOPPING=1
STATUS=<some string>
```

logger a logger object msg the message to send

Returns a decorator that logs timing events or errors for public methods in swift's WSGI server controllers, based on response code.

Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep

Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep

Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition

Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf.

Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string

Ensure that a pickle file gets written to disk. The file is first written to a tmp location and synced to disk, then moved to its final location. obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0

WSGI tools for use with swift.

Bases: NamedConfigLoader Read configuration from multiple files under the given path.

Bases: Exception

Bases: ConfigFileError

Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader.

Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from.

Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters.
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise

Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable.

Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. servers_per_port (int) The number of workers to run per port.

Yields all known listen sockets.

Log a server's exit.

Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports).

Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground.

Called when a worker has exited. pid (int) The PID of the worker that exited.

Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID

Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shut down any listen sockets. Signal that the server is up and accepting connections.

Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers.

Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object.

Yields all known listen sockets.

Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks().

We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5.

Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored.

Return a server listen socket if the server should run in the foreground (no fork).

Called when a worker has exited.
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited.

Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID

Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file

Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error

Read the app config section from a config file. conf_file path to a config file a dict

Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app

Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env['PATH_INFO']. url += quote(environ['PATH_INFO']) query_string The new query_string or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += '?' + env['QUERY_STRING'] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment.

Same as make_env() but with preauthorization.

Same as make_subrequest() but with preauthorization.

Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_env make_subrequest calls this make_env to help build the swob.Request.
Fresh swob.Request object.

Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a servers_per_port setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allow_modify_pipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise

Wrap a function whose first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function.

Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclass (e.g. StoragePolicy, ECStoragePolicy, etc.) or use reload_storage_policies() to load POLICIES from swift.conf. The object_ring property is lazy loaded once the service's swift_dir is known via get_object_ring(), but it may be over-ridden via the object_ring kwarg at create time for testing or actively loaded with load_ring().

Adds an alias name to the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy

Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name.

Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance.

Return the info dict and conf file options for this policy. config boolean, if True all config options are returned

Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change

Number of successful backend requests needed for the proxy to consider the client request successful.

Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation.

Removes an alias name from the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy

Validation hook used when loading the ring; currently only used for EC

Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header.
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment.

Backend index for PyECLib node_index integer of node index integer of actual fragment index. if param is not an integer, return None instead

Return the info dict and conf file options for this policy. config boolean, if True all config options are returned

Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed()

EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring.

Bases: ValueError

Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1

Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies defined, Swift will create one The policy index must be a non-negative integer If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default Policy indexes must be unique Policy names are required Policy names are case insensitive Policy names must contain only letters, digits or a dash Policy names must be unique The policy name Policy-0 can only be used for the policy with index 0 If any policies are defined, exactly one policy must be declared default Deprecated policies cannot be declared the default

Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add.

Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name.

Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy

Find a storage policy by its name. name name of the policy storage policy, or None

Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object

Build info about policies for the /info endpoint list of dicts containing relevant policy information

Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove.

Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection.

Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index

Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf

Reload POLICIES from swift.conf.

Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
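A minimal sketch of the policy-string helpers just described, assuming the names get_policy_string, split_policy_string and POLICIES from swift.common.storage_policy and a swift.conf that defines a policy with index 2:

```python
from swift.common.storage_policy import (
    POLICIES, get_policy_string, split_policy_string)

# Encode a policy index into a directory name...
get_policy_string('objects', 0)  # -> 'objects' (legacy Policy-0 form)
get_policy_string('objects', 2)  # -> 'objects-2'

# ...and decode it again; the second element is a StoragePolicy instance.
base, policy = split_policy_string('objects-2')
assert base == 'objects' and policy == POLICIES[2]
```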
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.account_quotas.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Returns a version-2 Swift ACL JSON string. Header-Name: {"arbitrary":"json","encoded":"string"} JSON will be forced ASCII (containing six-char \uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys don't cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you want). acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL

Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parse_acl_v1() or parse_acl_v2()) the return value of parse_acl_v1() or parse_acl_v2()

Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access.

Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {'groups': [...], 'referrers': [...]} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string

Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not.

Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, I'm looking at you :/), and instead make all calls through httplib.

Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in __init__. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader('Accept', 'text/html') Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skip_accept_encoding if True does not add automatically an Accept-Encoding: header

alias of BufferedHTTPResponse

Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed.
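Before the remaining response details, a minimal sketch of making a backend request with this module; the address, port, device and partition values are illustrative, and it assumes http_connect as exported by swift.common.bufferedhttp:

```python
from swift.common.bufferedhttp import http_connect

# HEAD an object directly on an object-server (values are examples only).
conn = http_connect('127.0.0.1', 6200, 'sda1', '0', 'HEAD', '/a/c/o',
                    headers={'X-Backend-Storage-Policy-Index': '0'})
resp = conn.getresponse()
print(resp.status)
resp.read()   # drain before reusing/closing the connection
conn.close()
```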
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket.

Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment.

Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always b'\n' for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized.

Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object

Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object

Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted.

Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted.

Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object

Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isn't possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate

Validate the path given by root and drive is a valid existing directory.
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate

Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise

Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be checked. req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None

Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us from accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate

Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted.

Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value

Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise

Parse SWIFT_CONF_FILE and reset module level global constraint attrs, populating OVERRIDE_CONSTRAINTS and EFFECTIVE_CONSTRAINTS along the way.

Checks if the requested version is valid. Currently Swift only supports v1 and v1.0.

Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp

Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the file's mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the user's container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file.

Returns a tuple of (digest_algorithm, hex_encoded_digest) from a client-provided string of the form:

```
<hex-encoded digest>
```

or:

```
<algorithm>:<base64-encoded digest>
```

Note that hex-encoded strings must use one of sha1, sha256, or sha512.
Returns a tuple of (digest_algorithm, hex_encoded_digest) from a client-provided string of the form:

```
<hex-encoded digest>
```

or:

```
<algorithm>:<base64-encoded digest>
```

Note that hex-encoded strings must use one of sha1, sha256, or sha512. ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated it'll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULT_ALLOWED_DIGESTS. logger optional logger; if provided, use it to issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared secret. digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/o\n127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will be a HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g., {'states': 'updating'} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict.
Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response resp_chunk_size if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the object's contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make requests. node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP POST request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed
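Tying together the direct_head_container() helper documented above, a hedged sketch; the ring location, account and container names are invented:

```
# Hedged sketch: fetching container info straight from one container
# server, bypassing the proxy. Paths and names are made up.
from swift.common.ring import Ring
from swift.common.direct_client import direct_head_container

ring = Ring('/etc/swift', ring_name='container')
part, nodes = ring.get_nodes('AUTH_test', 'images')
headers = direct_head_container(nodes[0], part, 'AUTH_test', 'images')
print(headers.get('x-container-object-count'))
```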
Make a PUT request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly to the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in don't over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of times. func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyword arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating over the object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file object's read(). Compressed chunk from file object.
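A hedged sketch of wrapping a file with CompressingFileReader, as described above; the payload is invented and compresslevel/chunk_size are the documented defaults:

```
# Hedged sketch of CompressingFileReader; the payload bytes are made up.
from io import BytesIO
from swift.common.internal_client import CompressingFileReader

reader = CompressingFileReader(BytesIO(b'some object data'),
                               compresslevel=9, chunk_size=4096)
compressed = b''.join(chunk for chunk in reader)  # iterate in chunks
```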
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. request_tries Number of tries before InternalClient.make_request() gives up. use_replication_network Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in global_conf will override those in conf_path except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The container's account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, False otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The container's account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The container's account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The object's account. container The object's container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (container_count, object_count) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account metadata. account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The container's account.
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
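Tying the InternalClient methods documented above together, a hedged sketch; the conf path, user agent and account name are invented:

```
# Hedged sketch of driving Swift through InternalClient. '3' is the
# documented request_tries argument; names and paths are made up.
from swift.common.internal_client import InternalClient

client = InternalClient('/etc/swift/internal-client.conf', 'my-daemon', 3)
for container in client.iter_containers('AUTH_test'):
    print(container['name'], container['count'])
meta = client.get_account_metadata('AUTH_test')
```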
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of servers. servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names that are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile a conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if True capture stdout with a pipe daemon boolean, if False ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name
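A hedged sketch of the Manager commands listed above; the server name is just an example:

```
# Hedged sketch of Manager; 'object-auditor' is an example server name.
from swift.common.manager import Manager

manager = Manager(['object-auditor'])
if manager.status() != 0:   # non-zero when a server is not running
    manager.once()          # start the server and run one pass
```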
Get the current set of all child PIDs for a PID. pid process id Send signal to process group pid process id sig signal to send Send signal to process and check process name pid process id sig signal to send name name to ensure target process Try to increase resource limits of the OS. Move PYTHON_EGG_CACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the server's binaries are installed or not. server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that aren't responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesn't use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but it's hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldn't work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parse_socket_string() for details. Generate a new pool item. In order for the pool to function, either this method must be overridden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract from the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache.
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raise_on_error if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by delta. If the key can't be found, it's added as delta or 0 if delta < 0. If passed a negative number, will use memcached's decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live min_compress_len minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raise_on_error if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live min_compress_len minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware.
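A hedged sketch of the consistent-hashed MemcacheRing client described above; the server address and key are invented:

```
# Hedged sketch of MemcacheRing; the memcached address is made up.
from swift.common.memcached import MemcacheRing

cache = MemcacheRing(['127.0.0.1:11211'])
cache.set('shard-counter', 0, time=300)   # JSON-serialized by default
cache.incr('shard-counter', delta=5, time=300)
print(cache.get('shard-counter'))
```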
Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the register_swift_info call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the reveal_sensitive_prefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the reveal_sensitive_prefix option in the proxy-server sample config for more information. query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include temp_url_signature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to get_swift_info. Use disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large object. req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. max_get_time maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) response_body_length optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .app_iter_range(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnects. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segment's etag doesn't match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from from_r to to_r. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key.
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object" }, { "data": "HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasnt stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but its also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Requests Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers[Content-Range] sets and gets the value of headers.environ[HTTPCONTENTRANGE] Bases: object Wraps a Requests If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Requests Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. 
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will be all considered as syntactically invalid, thus, a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid, ValueError will be thrown so that the header will be ignored. value not starts with bytes= range value start is greater than the end, eg. bytes=5-3 range does not have start or end, eg. bytes=- range does not have hyphen, eg. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method init made sure that any range in ranges list is syntactically valid. So if length is None or size of the ranges is zero, then the Range header should be ignored which will eventually make the response to be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, 416 will be" }, { "data": "if a returned list has at least one element, the list indicates that there is at least one range valid and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as a Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with an property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this requests environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the requests X-Timestamp header, as a Timestamp Calls the application with this requests environment. Returns a Response object that wraps up the applications result. 
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples:

```
['a'] = split_path('/a')
['a', None] = split_path('/a', 1, 2)
['a', 'c'] = split_path('/a/c', 1, 2)
['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True)
```

minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location.
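Putting the swob Request and Response objects documented above together, a hedged sketch; the paths, header values and body are invented:

```
# Hedged sketch of building a swob Request and Response in memory.
from swift.common.swob import Request, Response

req = Request.blank('/v1/AUTH_test/c/o', method='GET',
                    headers={'X-Trans-Id': 'tx-example'})
version, account, container, obj = req.split_path(4, 4, True)
resp = Response(request=req, status=200, body=b'hello',
                content_type='text/plain')
```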
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of a If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need support conditional requests. Its most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or appiter to reset the contentlength properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators close methods, if" }, { "data": "Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of getnext_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gced Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads. 
This is for fairness; if the network is outpacing the CPU, well always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients dont starve. The number 5 here was chosen by making stuff up. Its not every single chunk, but its not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterators next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used" }, { "data": "Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <accountname>/<containername>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces. 
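A hedged sketch of the Namespace bounds semantics described above; the container path and bounds are invented:

```
# Hedged sketch of Namespace; the lower bound is excluded and the upper
# bound included, per the description above.
from swift.common.utils import Namespace

ns_a = Namespace('AUTH_test/c', lower='', upper='m')
ns_b = Namespace('AUTH_test/c', lower='g', upper='t')
assert ns_a.overlaps(ns_b)        # 'g' < 'm', so the ranges overlap
assert not ns_a.includes(ns_b)    # ns_b extends past ns_a's upper bound
```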
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp.
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard range's ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard range's parent shard range, the parent's parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grandparent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard range's root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard range's root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class __init__() method and requires all of the __init__() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if object_count or bytes_used cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard ranges. This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard.
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name> Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ``` >>> si = Spliterator([\"abcde\", \"fg\", \"hijkl\"]) >>> ''.join(si.take(4)) \"abcd\" >>> ''.join(si.take(3)) \"efg\" >>> ''.join(si.take(1)) \"h\" >>> ''.join(si.take(3)) \"ijk\" >>> ''.join(si.take(3)) \"l\" # shorter than requested; this can happen with the last iterator ``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts.Timeout, it reduces the amount of context switching in eventlet by avoiding scheduling actions (throwing an Exception) and then unscheduling them if the timeouts are cancelled. For example, when a 10 second timeout is scheduled the watchdog greenlet sleeps 10 seconds; it is woken up early only to calculate a new sleep period when a shorter timeout is scheduled, and otherwise wakes up for the first timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. e.g.: they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. e.g.: saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize number of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware) env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string.
If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header There is one optional key that is used in the multipart/byteranges case: entity_length: length of the requested entity (not necessarily equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. boundary, not --boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, gets session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content_type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it.
It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an unused response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form: ``` <hash>[_<epoch>].db ``` where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the python select module to test if poll() is available on the platform. Return partition number for given hex hash and partition power. :param hex_hash: A hash string :param part_power: partition power :returns: partition number devices directory where devices are mounted (e.g.
/srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but an X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds) time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ``` 'lock', None => 'lock' 'lock', 0 => 'lock' 'lock', 1 => 'lock-1' 'object', 2 => 'object-2' ``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file.
This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in Write mode. Python doesn't allow directories to be opened in Write Mode. So we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lock_path). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string that distinguishes different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set, get, etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Ensures the path is a directory or makes it if not.
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name=\"somefile\"; filename=\"test.html\" Return data like (\"form-data\", {\"name\": \"somefile\", \"filename\": \"test.html\"}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')]) ``` content_type content-type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >>> parse_db_filename(\"ab2134.db\") ('ab2134', None, '.db') >>> parse_db_filename(\"ab2134_1234567890.12345.db\") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. :param iterable: an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
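As a concrete illustration of the rate-limiting sleep helper documented above, here is a minimal sketch (it assumes the helper is exposed as swift.common.utils.ratelimit_sleep, and the chunks iterable and process() function are hypothetical placeholders, not part of the Swift API):
```
# Hypothetical usage sketch of the rate limiter documented above.
from swift.common.utils import ratelimit_sleep

running_time = 0              # "Best to start at zero", per the docs
for chunk in chunks:          # placeholder: any iterable of byte chunks
    # Sleep just enough that we never exceed 1024 bytes/sec overall;
    # incr_by weights each call by the number of bytes handled.
    running_time = ratelimit_sleep(running_time, max_rate=1024,
                                   incr_by=len(chunk))
    process(chunk)            # placeholder for the real per-chunk work
```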
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different types of servers: If option replication_server is None then this decorator doesn't matter. If option replication_server is True then ONLY methods decorated with this decorator will be started. If option replication_server is False then methods decorated with this decorator will NOT be started. func function to mark accessible for replication Takes a list of iterators, yields an element from each in a round-robin fashion until all of them are exhausted. :param its: list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate a device's variables inside an rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dir_ext if present, directories that end with dir_ext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that item's value will determine if the replication ip address and port are returned. If neither use_replication nor node_dict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error.
value some base64-encoded data allow_line_breaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swift's wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensured to be synced to disk, then moved to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters.
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. servers_per_port (int) The number of workers to run per port. Yields all known listen sockets. Log a server's exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shut down any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. Yields all known listen sockets. Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited.
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID Bind socket to bind_ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or None to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env[PATH_INFO]. url += quote(environ[PATH_INFO]) query_string The new query_string or None to use the original. When building a url, a Webob Request will append the query string directly to the url. url += '?' + env[QUERY_STRING] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_env make_subrequest calls this make_env to help build the swob.Request.
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a servers_per_port setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allow_modify_pipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whose first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclass (e.g. StoragePolicy, ECStoragePolicy, etc.) or use reload_storage_policies() to load POLICIES from swift.conf. The object_ring property is lazy loaded once the service's swift_dir is known via get_object_ring(), but it may be overridden via the object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header.
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index. if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for minparityfragments_needed() EC specific validation Replica count check - we need atleast_ (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number theres a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reloadstoragepolicies() to load POLICIES from swift.conf. floor(number of replica / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by" }, { "data": "When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies defined, Swift will create one The policy index must be a non-negative integer If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default Policy indexes must be unique Policy names are required Policy names are case insensitive Policy names must contain only letters, digits or a dash Policy names must be unique The policy name Policy-0 can only be used for the policy with index 0 If any policies are defined, exactly one policy must be declared default Deprecated policies can not be declared the default Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swiftdir swiftdir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy. 
Bases: object. An instance of this class is the primary interface to storage policies, exposed as a module level global named POLICIES. This global reference wraps _POLICIES, which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly; instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection.

Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base: the base string. policy_or_index: a StoragePolicy instance, or an index (string or int); if None the legacy storage Policy-0 is assumed. Returns the base name with the policy index added. Raises PolicyError if no policy exists with the given policy_index.

Parse storage policies in swift.conf; note that validation is done when the StoragePolicyCollection is instantiated. conf: ConfigParser parser object for swift.conf. Reload POLICIES from swift.conf.

Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string: base name with policy index added. Raises PolicyError if the given index does not map to a valid policy. Returns a tuple, in the form (base, policy), where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
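A simplified sketch of the encode/decode round trip described above; the real helpers resolve the index against POLICIES, return a StoragePolicy instance, and raise PolicyError on unknown indexes:

```python
def get_policy_string(base, policy_index):
    # The legacy Policy-0 keeps the bare base name; any other policy
    # gets a '-<index>' suffix (e.g. 'objects' vs 'objects-2').
    if not policy_index:  # None or 0 is treated as legacy Policy-0
        return base
    return '%s-%d' % (base, int(policy_index))


def split_policy_string(policy_string):
    # Inverse of the above; returns (base, policy_index).
    base, sep, index = policy_string.rpartition('-')
    if sep and index.isdigit():
        return base, int(index)
    return policy_string, 0


assert get_policy_string('objects', 2) == 'objects-2'
assert split_policy_string('objects-2') == ('objects', 2)
assert split_policy_string('objects') == ('objects', 0)
```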
account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded, while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set.

Note: per-policy quotas need not sum to the overall account quota, and the sum of all container quotas for a given policy need not sum to the account's policy quota.

The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example:

```
[pipeline:main]
pipeline = catch_errors cache tempauth account_quotas proxy-server

[filter:account_quotas]
use = egg:swift#account_quotas
```

To set the quota on an account:

```
swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000
```

Remove the quota:

```
swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:
```

The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header, the proxy server doesn't know the final size of the currently uploaded object, and the upload will be allowed if the current account size is within the quota. Due to eventual consistency, further uploads might be possible until the account size has been updated.

Bases: object. Account quota middleware. See above for a full description. Returns a WSGI filter app for use with paste.deploy.

The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware in your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for your swift project. When running Swift in a SAIO environment, make sure you have the tempauth middleware configured in proxy-server.conf; the access key will be the concatenation of the account and user strings, which should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. An example tempauth configuration:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing
```

An example client using tempauth with the python boto library is as follows:

```
from boto.s3.connection import S3Connection
connection = S3Connection(
    aws_access_key_id='test:tester',
    aws_secret_access_key='testing',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
```

And if you are using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or created with the openstack ec2 command. Here is an example of creating an EC2 credential:

```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
```

An example client using keystone auth with the python boto library is as follows:

```
from boto.s3.connection import S3Connection
connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
```

Set s3api before your auth middleware in the pipeline in your proxy-server.conf file. To enable all compatibility currently supported, you should also make sure that bulk, slo, and your auth middleware are included in your proxy pipeline. Using tempauth, the minimum example config is:

```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
```

When using keystone, the config will be:

```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
```

Finally, add the s3api middleware section:

```
[filter:s3api]
use = egg:swift#s3api
```

Note: keystonemiddleware.authtoken can be located before/after s3api, but we recommend putting it before s3api, because when authtoken is after s3api, both authtoken and s3token will issue validation requests to keystone (i.e. authenticate twice). Also, in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True.

Currently, the s3api is being ported from https://github.com/openstack/swift3, so any existing issues in swift3 may still remain. Please make sure you understand the descriptions in the example proxy-server.conf and what happens with each config option before enabling it. Compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat in detail.

Bases: object. S3Api: S3 compatibility middleware. Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy.

s3token middleware is for authentication with s3api + keystone. This middleware: gets a request from the s3api middleware with an S3 Authorization access key; validates the s3 token with Keystone; transforms the account name to AUTH_%(tenant_name); and optionally can retrieve and cache the secret from keystone to validate the signature locally.

Note: if upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like

```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
```

you should now use

```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
```

Bases: object. Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object. A wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request. S3Acl request object. The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth (no support for third-party authentication middleware).
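The examples above use the legacy boto library; for reference, a roughly equivalent sketch with the newer boto3 client against the same hypothetical SAIO endpoint and tempauth credentials might look like this (depending on the cluster's signature configuration you may also need to pin a region or signature version via botocore's Config):

```python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://127.0.0.1:8080',
    aws_access_key_id='test:tester',
    aws_secret_access_key='testing',
)
s3.create_bucket(Bucket='mybucket')
s3.put_object(Bucket='mybucket', Key='hello.txt', Body=b'hello world')
print(s3.list_objects_v2(Bucket='mybucket')['KeyCount'])
```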
A wrapper method of get_response to add s3 acl information from response sysmeta headers. Wraps up the get_response call to hook in the acl handling method. Create a Swift request based on this request's environment.

Bases: BaseException. The client provided an X-Amz-Content-SHA256, but it doesn't match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware.

Bases: Request. S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property. check_copy_source checks that the copy source exists, that an object is not being copied to itself, and that there are no illegal request parameters. Returns the source HEAD response. get_container_info will return a result dict of get_container_info from the backend Swift. Returns a dictionary of container info from swift.controllers.base.get_container_info. Raises NoSuchBucket when the container doesn't exist, and InternalError when the request failed without a 404. get_response is an entry point to be extended for child classes. If additional tasks are needed at the time of getting the swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request just needs to call get_response to get a pure swift response. Get and set the object acl property. S3Timestamp from the Date header. If the X-Amz-Date header is specified, it takes precedence over the Date header. Returns an S3Timestamp instance. Create a Swift request based on this request's environment.

Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured max_upload_part_num and, if given, parts_count. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count: if given, this is the number of parts in an existing object. Raises InvalidPartArgument if the partNumber param is invalid, i.e. less than 1 or greater than the maximum allowed parts; raises InvalidPartNumber if the partNumber param is valid but greater than num_parts. Returns an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string.

Bases: object. A request class mixin to provide S3 signature v4 functionality. Return a timestamp string according to the auth type. The difference from v2 is that v4 has to use X-Amz-Date even for the query auth type. Bases: SigV4Mixin, S3Request. Bases: SigV4Mixin, S3AclRequest. Helper function to find a request class to use from the Map.

A family of S3 error response classes follows, each with Bases: ErrorResponse. Bases: S3ResponseBase, HTTPException. S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse. Bases: HeaderKeyDict. Similar to Swift's normal HeaderKeyDict class, but its key name is normalized as S3 clients expect.
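The partNumber rules described above amount to a couple of range checks; here is a standalone sketch, raising ValueError in place of the real InvalidPartArgument/InvalidPartNumber error responses:

```python
def validate_part_number(part_number_param, max_upload_part_num,
                         parts_count=None):
    # Returns an integer part number, or None if the param is absent.
    if part_number_param is None:
        return None
    part_number = int(part_number_param)
    # the maximum allowed parts is the max of the configured limit
    # and, if given, the existing object's parts_count
    max_parts = max(max_upload_part_num, parts_count or 0)
    if not 1 <= part_number <= max_parts:
        raise ValueError(
            'partNumber must be an integer between 1 and %d' % max_parts)
    if parts_count is not None and part_number > parts_count:
        raise ValueError('the object has only %d parts' % parts_count)
    return part_number
```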
A long run of S3 error classes follows; each has Bases: ErrorResponse, except one which has Bases: InvalidArgument.

Bases: S3ResponseBase, Response. Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift-specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object. Base class for swift3 responses. Further error classes follow, with Bases: ErrorResponse, one based on BucketNotEmpty, several S3Exception subclasses, and one based on Exception.

Bases: ElementBase. Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as text. Why do we need this? The original lxml.etree.Element supports only unicode for the text. That hurts maintainability, because we would have to call a lot of encode/decode methods to apply account/container/object names (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text.

Bases: dict. If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

Bases: Timestamp. This format should be like YYYYMMDDThhmmssZ. mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats for the argument, for S3 testing usage. timestamp_str: a string timestamp formatted as (a) RFC 2822 (e.g. a date header) or (b) %Y-%m-%dT%H:%M:%S (e.g. a copy result). time_format: a format string used to parse form (b). Returns a float instance in epoch time.

Returns the system metadata header for a given resource type and name. Returns the system metadata prefix for a given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html. True is valid, False is invalid.
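A rough sketch of DNS-style bucket name checks like those described above; the real validator also honors configuration (for example, allowing non-DNS-compliant names), so treat this as illustrative:

```python
import re

BUCKET_NAME_RE = re.compile(r'^[a-z0-9][a-z0-9.\-]{1,61}[a-z0-9]$')
IPV4_RE = re.compile(r'^(\d{1,3}\.){3}\d{1,3}$')


def valid_bucket_name(name):
    # 3-63 chars, lowercase letters/digits/dots/dashes, must start and
    # end with a letter or digit, and must not look like an IP address
    if not BUCKET_NAME_RE.match(name):
        return False
    if IPV4_RE.match(name):
        return False
    if '..' in name or '.-' in name or '-.' in name:
        return False
    return True


print(valid_bucket_name('my-bucket'))  # True
print(valid_bucket_name('MyBucket'))   # False
print(valid_bucket_name('10.1.1.1'))   # False
```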
s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. The current s3api (real S3) ACL model is as follows:

```
AccessControlPolicy:
    Owner:
    AccessControlList:
        Grant[n]: (Grantee, Permission)
```

Each bucket or object has its own acl, consisting of an Owner and an AccessControlList. An AccessControlList can contain some Grants. By default, an AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant is a single (Grantee, Permission) pair. The Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you want more detail about the S3 ACL model, please see the official documentation here: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html

Bases: object. S3 ACL class. (From http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance. Convert HTTP headers to an ACL instance.

Bases: Group. Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves completely the same as AuthenticatedUsers. Bases: Group. This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated).

Bases: object. A dict-like object that returns canned ACLs. Bases: object. Grant class, which includes both Grantee and Permission. Create an etree element. Convert an ElementTree to an ACL instance. Bases: object. Base class for grantees. Methods: __init__ creates a Grantee instance; elem creates an ElementTree from itself. Static methods: from_header converts a grantee string in an HTTP header to a Grantee instance; from_elem converts an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee. Base class for Amazon S3 predefined groups. Get an etree element of this instance. Bases: Group. WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object. Owner class for S3 accounts. Bases: Grantee. Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups.
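For orientation, here is a hand-built AccessControlPolicy document matching the schema sketched above, assembled with lxml; the owner ID test:tester is a placeholder and the exact serialization s3api emits may differ:

```python
from lxml import etree

XSI = 'http://www.w3.org/2001/XMLSchema-instance'

acp = etree.Element('AccessControlPolicy')
owner = etree.SubElement(acp, 'Owner')
etree.SubElement(owner, 'ID').text = 'test:tester'

acl = etree.SubElement(acp, 'AccessControlList')
grant = etree.SubElement(acl, 'Grant')
grantee = etree.SubElement(
    grant, 'Grantee', attrib={'{%s}type' % XSI: 'CanonicalUser'})
etree.SubElement(grantee, 'ID').text = 'test:tester'
etree.SubElement(grant, 'Permission').text = 'FULL_CONTROL'

print(etree.tostring(acp, pretty_print=True).decode())
```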
To make controller classes clean, we need these handlers. They are really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic acl handling (i.e. it will check the acl from ACL_MAP by using HEAD). Make a handler with the name of the controller (e.g. BucketAclHandler is for BucketController). It consists of method(s) for the actual S3 method on the controllers, as follows. Example:

```
class BucketAclHandler(BaseAclHandler):
    def PUT(self):
        # put acl handling algorithms here for PUT bucket
        ...
```

Note: if the method does NOT need to recall get_response outside of the acl checking, the method has to return the response it needs at the end of the method.

Bases: object. BaseAclHandler: handles ACLs for basic requests mapped on ACL_MAP. Get an ACL instance from S3 (e.g. x-amz-grant) headers or the S3 acl xml body. Bases: BaseAclHandler. BucketAclHandler: handler for BucketController. Bases: BaseAclHandler. MultiObjectDeleteAclHandler: handler for MultiObjectDeleteController. Bases: BaseAclHandler. Multipart upload operations require acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step of the request to the backend Swift for an incoming request. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is OK, but we should check it as soon as possible.

| Controller | Verb   | CheckResource | Permission |
|:-----------|:-------|:--------------|:-----------|
| Part       | PUT    | Container     | WRITE      |
| Uploads    | GET    | Container     | READ       |
| Uploads    | POST   | Container     | WRITE      |
| Upload     | GET    | Container     | READ       |
| Upload     | DELETE | Container     | WRITE      |
| Upload     | POST   | Container     | WRITE      |

Bases: BaseAclHandler. ObjectAclHandler: handler for ObjectController. Bases: MultiUploadAclHandler. PartAclHandler: handler for PartController. Bases: BaseAclHandler. S3AclHandler: handler for S3AclController. Bases: MultiUploadAclHandler. UploadAclHandler: handler for UploadController. Bases: MultiUploadAclHandler. UploadsAclHandler: handler for UploadsController. Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acls, such as x-container-read, for s3acl. Takes an S3-style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet.

Bases: object. Base WSGI controller class for the middleware. Returns the target resource type of this controller. Bases: Controller. Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure that the container exists. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller. Handles account level requests. Handle GET Service request. Bases: Controller. Handles bucket requests. Handle DELETE Bucket request. Handle GET Bucket (List Objects) request. Handle HEAD Bucket (Get Metadata) request. Handle POST Bucket request. Handle PUT Bucket request. Bases: Controller. Handles requests on objects. Handle DELETE Object request. Handle GET Object request. Handle HEAD Object request. Handle PUT Object and PUT Object (Copy) request. Bases: Controller. Handles the following APIs: GET Bucket acl, PUT Bucket acl, GET Object acl, PUT Object acl. Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers. Bases: Controller. Handles the following APIs: GET Bucket acl, PUT Bucket acl, GET Object acl, PUT Object acl. Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl.

Implementation of S3 Multipart Upload. This module implements the S3 Multipart Upload APIs with the Swift SLO feature. The following explains how s3api uses swift containers and objects to store S3 upload information: [bucket]+segments is a container to store upload information, where [bucket] is the original bucket in which the multipart upload is initiated. [bucket]+segments/[upload_id] is an object for the ongoing upload id; the object is empty and is used for checking the target upload status. If the object exists, it means that the upload is initiated but not yet completed or aborted. In [bucket]+segments/[upload_id]/[part_number], the last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace [bucket]+segments/[upload_id]/[part_number]. Example listing result in the [bucket]+segments container:

```
[bucket]+segments/[upload_id1]    # upload id object for upload_id1
[bucket]+segments/[upload_id1]/1  # part object for upload_id1
[bucket]+segments/[upload_id1]/2  # part object for upload_id1
[bucket]+segments/[upload_id1]/3  # part object for upload_id1
[bucket]+segments/[upload_id2]    # upload id object for upload_id2
[bucket]+segments/[upload_id2]/1  # part object for upload_id2
[bucket]+segments/[upload_id2]/2  # part object for upload_id2
.
.
```

Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed.

Bases: Controller. Handles the following APIs: Upload Part, Upload Part - Copy. Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller. Handles the following APIs: List Parts, Abort Multipart Upload, Complete Multipart Upload. Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller. Handles the following APIs: List Multipart Uploads, Initiate Multipart Upload. Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads. Handles Initiate Multipart Upload.

Bases: Controller. Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller. Handles the following APIs: GET Bucket versioning, PUT Bucket versioning. Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller. Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller. Handles the following APIs: GET Bucket logging, PUT Bucket logging. Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging.
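As a small illustration of the naming scheme described in the multipart upload section above (paths only; the actual controllers do considerably more):

```python
MULTIUPLOAD_SUFFIX = '+segments'


def segments_container(bucket):
    # the hidden container that holds upload markers and parts
    return bucket + MULTIUPLOAD_SUFFIX


def upload_marker(bucket, upload_id):
    # the empty object recording an in-progress upload
    return '%s/%s' % (segments_container(bucket), upload_id)


def part_object(bucket, upload_id, part_number):
    # where an uploaded part lives until the upload completes
    return '%s/%s/%d' % (segments_container(bucket), upload_id, part_number)


print(upload_marker('bucket', 'upload_id1'))   # bucket+segments/upload_id1
print(part_object('bucket', 'upload_id1', 1))  # bucket+segments/upload_id1/1
```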
Bases: object. Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. If a request would cause the rate-limit to be exceeded for the method and/or device, then a response with a 529 status code is returned.

Middleware that will perform many operations on a single request. Expand tar files into a Swift account. The request must be a PUT with the query parameter ?extract-archive=format specifying the format of the archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url:

```
/v1/AUTH_Account/$UPLOAD_PATH?extract-archive=tar.gz
```

UPLOAD_PATH is where the files will be expanded to. UPLOAD_PATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows:

```
/v1/AUTH_Account/$UPLOAD_PATH/$FILE_PATH
```

Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc. will not be uploaded.

If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header:

```
X-Detect-Content-Type: true
```

For example:

```
curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H "Content-Type: application/x-tar" -H "X-Auth-Token: xxx" -H "X-Detect-Content-Type: true"
```

The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example:

```
setfattr -n user.mime_type -v "application/python-setup" setup.py
setfattr -n user.meta.lunch -v "burger and fries" setup.py
setfattr -n user.meta.dinner -v "baked ziti" setup.py
setfattr -n user.stuff -v "whee" setup.py
```

Will get translated to headers:

```
Content-Type: application/python-setup
X-Object-Meta-Lunch: burger and fries
X-Object-Meta-Dinner: baked ziti
```

The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only the xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later.
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
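Putting the pieces above together, a bulk delete request might be issued like the following sketch; the endpoint, token, and paths are hypothetical:

```python
import requests

# newline-separated, URL-encoded /container/object (or /container) paths
body = '\n'.join([
    '/mycontainer/photo%20one.jpg',
    '/mycontainer/photo2.jpg',
    '/emptycontainer',
])
resp = requests.post(
    'http://127.0.0.1:8080/v1/AUTH_test?bulk-delete',
    headers={'X-Auth-Token': 'AUTH_tk_example',
             'Content-Type': 'text/plain',
             'Accept': 'application/json'},
    data=body)
# remember: the status is always 200 OK; parse the body for results
print(resp.json()['Number Deleted'])
```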
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequests proxy log will have a swift.source set to BD the logs content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is" }, { "data": "inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domains CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environments Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and its unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | Metadata Use X-Container-Meta-Quota-Bytes Maximum size of the container, in bytes. X-Container-Meta-Quota-Count Maximum object count of the container. The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. 
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
Raises HTTPInternalServerError if any error occurs while decrypting headers. Decrypts a multipart mime doc response body. resp: application response. boundary: multipart boundary string. body_key: decryption key for the response body. crypto_meta: crypto_meta for the response body. Returns a generator for the decrypted response body. Decrypts a response body. resp: application response. body_key: decryption key for the response body. crypto_meta: crypto_meta for the response body. offset: offset into object content at which the response body starts. Returns a generator for the decrypted response body.

This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache:

```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```

Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s.

FormPost Middleware. Translates a browser form post into a regular Swift object PUT. The format of the form is:

```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
    <input type="hidden" name="redirect" value="<redirect-url>" />
    <input type="hidden" name="max_file_size" value="<bytes>" />
    <input type="hidden" name="max_file_count" value="<count>" />
    <input type="hidden" name="expires" value="<unix-timestamp>" />
    <input type="hidden" name="signature" value="<hmac>" />
    <input type="file" name="file1" /><br />
    <input type="submit" />
</form>
```

Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```

If you want to specify the content type or content encoding of the files, you can set content-encoding or content-type by adding them to the form input:

```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```

The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like:

```
https://swift-cluster.example.com/v1/AUTH_account/container/
```

Optionally, you can include an object prefix to better separate different users' uploads, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest, the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:

```
import hmac
from hashlib import sha512
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (
    path, redirect, max_file_size, max_file_count, expires)
signature = hmac.new(
    key.encode(), hmac_body.encode(), sha512).hexdigest()
```

The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation, as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored).

Bases: object. FormPost Middleware. See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app: The next WSGI filter or app in the paste.deploy chain. conf: The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy.

The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response.

Bases: object. Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body.

Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object. Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function. Creates an encryption key that is unique for the given path. path: the (WSGI string) path of the resource being encrypted. secret_id: the id of the root secret from which the key should be derived. Returns an encryption key. Raises UnknownSecretIdError if the secret_id is not recognised.

Bases: BaseKeyMaster. Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key; therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss.

Bases: WSGIContext. The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:

```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```

Setup container and object keys based on the request path. Keys are derived from the request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id: if given, this should be a dict with the items included under the id key of a dict returned by this method. Returns a dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids, including the one used to generate the returned keys.

Bases: object. Swift middleware to the Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone.
proxy-server.conf-sample The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delayauthdecision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to have the setting account auto creation to true: ``` [app:proxy-server] account_autocreate = true ``` And add a swift authorization filter section, such as: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id.. For example, if the project id is 1234, the path is" }, { "data": "If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this: ``` reseller_prefix = NEWAUTH ``` Dont forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example: ``` reseller_prefix = AUTH, SERVICE ``` This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix: ``` operator_roles service_roles ``` For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes: ``` reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator AUTHoperatorroles = admin, swiftoperator SERVICEoperatorroles = admin, swiftoperator SERVICEoperatorroles = admin, someotherrole ``` X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration: ``` reseller_prefix = AUTH, SERVICE AUTHoperatorroles = admin, swiftoperator SERVICEoperatorroles = admin, swiftoperator SERVICEserviceroles = service ``` The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique. 
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:
```
default_domain_id = default
```
The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:
```
allow_names_in_acls = false
```
To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:
```
allow_overrides = false
```
app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form:
```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```
with a JSON-encoded list of endpoints of the form:
```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```
correspondingly, e.g.:
```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```
Using the v2 API, answers requests of the form:
```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```
with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:
```
{ "endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1",
                "http://10.1.1.1:6230/sda3/2/a/c3/o1",
                "http://10.1.1.1:6240/sda4/2/a/c3/o1"],
  "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```
In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
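As a sketch of consuming this API from inside the cluster (the proxy address and account/container/object names below are illustrative):
```
import json
import urllib.request

# Query the v2 endpoints API on a proxy that has list_endpoints enabled.
url = 'http://127.0.0.1:8080/endpoints/v2/AUTH_test/c1/o1'
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())

for endpoint in data['endpoints']:
    print(endpoint)  # e.g. http://10.1.1.1:6200/sda1/2/AUTH_test/c1/o1
# Headers to send along with any request made to those endpoints:
print(data['headers'])
```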
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:
```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```
There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid. @author: eamonn-otoole Object versioning in Swift has 3 different modes. There are two legacy modes that have a similar API with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature containers in a Swift cluster will be in one of 3 possible states: 1. Object versioning never enabled, 2. Object Versioning Enabled or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
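For example, versioning might be enabled with a request like the following (a sketch; names illustrative):
```
curl -i -X POST http://<storage_url>/container \
  -H 'X-Auth-Token: <token>' \
  -H 'X-Versions-Enabled: true'
```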
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter:
```
?versions
```
To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter:
```
?version-id=<id>
```
where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in that version only being deleted and no changes made to the symlink pointing to the current version.
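For example, a past version could be fetched with a sketch like the following (the id value comes from a prior response or listing):
```
curl -i "http://<storage_url>/container/object?version-id=<id>" \
  -H 'X-Auth-Token: <token>'
```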
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline, and you must also set the option allow_object_versioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object req original request. versions_cont container where previous versions of the object are stored. account account name. Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions containers now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request acts upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:
```
client_ip remote_addr end_time.datetime method path protocol status_int referer user_agent auth_token bytes_recvd bytes_sent client_etag transaction_id headers request_time source log_info start_time end_time policy_index
```
These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source.
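As a rough sketch of such a processor, assuming the default field order shown above (positions would need adjusting if log_msg_template is customized):
```
from urllib.parse import unquote

def sum_client_bytes_sent(log_lines):
    """Sum bytes_sent across end-user requests (lines with no swift.source)."""
    total = 0
    for line in log_lines:
        fields = line.split()
        # 0-based positions in the default format: 11 = bytes_sent, 16 = source
        if len(fields) < 21:
            continue  # not a default-format access log line
        if unquote(fields[16]) != '-':
            continue  # middleware subrequest; skip to avoid double counting
        if fields[11] != '-':
            total += int(fields[11])
    return total
```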
req" }, { "data": "object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wirestatusint the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that make an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the systems page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys. 
Server side copy can also be used to change content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be zero). There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:
```
curl -i -X PUT http://<storage_url>/container1/destination_obj \
  -H 'X-Auth-Token: <token>' \
  -H 'X-Copy-From: /container2/source_obj' \
  -H 'Content-Length: 0'
```
Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:
```
curl -i -X COPY http://<storage_url>/container2/source_obj \
  -H 'X-Auth-Token: <token>' \
  -H 'Destination: /container1/destination_obj' \
  -H 'Content-Length: 0'
```
Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example:
```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj \
  -H 'X-Auth-Token: <token>' \
  -H 'X-Copy-From: /container/source_obj' \
  -H 'X-Copy-From-Account: AUTH_test2' \
  -H 'Content-Length: 0'
```
Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example:
```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj \
  -H 'X-Auth-Token: <token>' \
  -H 'Destination: /container/destination_obj' \
  -H 'Destination-Account: AUTH_test1' \
  -H 'Content-Length: 0'
```
The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:
```
?multipart-manifest=get
```
If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs for Static Large Objects further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:
```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```
Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:
```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```
If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/. If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable:
```
swift post -r '.r:*' container
```
You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive:
```
swift post -m 'web-index:index.html' container
```
You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings:
```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```
Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:
```
swift post -m 'web-listings-css:listings.css' container
```
Set an error file:
```
swift post -m 'web-error:error.html' container
```
Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object:
```
swift post -m 'web-directory-type:text/directory' container
```
Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e.
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function.
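As an illustration, a listing entry for a symlink might look roughly like this (values invented, and some standard listing fields omitted for brevity):
```
[{"name": "my_symlink",
  "bytes": 0,
  "content_type": "application/symlink",
  "symlink_path": "/v1/AUTH_test/other_container/target_obj"}]
```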
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors: PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:
```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```
Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called.
req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink- headers to client-facing X-Symlink- headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:
```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```
Set account auto creation to true in proxy-server.conf:
```
[app:proxy-server]
account_autocreate = true
```
And add a tempauth filter section, such as:
```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```
See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is:
```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```
If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:
```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```
There are three special groups: .reseller_admin can do anything to any account for this auth .reseller_reader can GET/HEAD anything in any account for this auth .admin can do anything within the account If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:
```
$HOST/v1/<reseller_prefix>_<account>
```
Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
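For example, a user line carrying an explicit storage URL might look like this (URL illustrative):
```
user_test_tester = testing .admin https://proxy.example.com/v1/AUTH_test
```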
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH. When a request's token and/or path start with AUTH, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:
```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```
The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL:
```
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
```
To generate a curl command line from the above:
```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { "admin": ["alice"], "read-write": ["bob", "carol"] }
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H '%s: %s'" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s "
      "$storage_url" % header_str)
'
```
Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various request for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:
```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr>  or  X-Storage-User: <usr>
    X-Auth-Key: <key>         or  X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr>  or  X-Storage-User: <act>:<usr>
    X-Auth-Key: <key>         or  X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr>  or  X-Storage-User: <act>:<usr>
    X-Auth-Key: <key>         or  X-Storage-Pass: <key>
```
On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs, which contain signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```
For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:
```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```
Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm 5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO
Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```
You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```
If a prefix-based signature with the prefix pre is desired, set path to:
```
path = 'prefix:/v1/AUTH_account/container/pre'
```
The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```
Another valid URL:
```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/
subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```
If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```
Similarly, to lock down the ip to a range of 1.2.3.X (so starting from the ip 1.2.3.0 to 1.2.3.255):
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
Then the following url would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```
Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```
If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```
In some cases, the client might not be able to present the content of the object, but you still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```
This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals. Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict.

Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY.

Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*.

Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE.

Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*.

Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay.

Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*.

Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key.

Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*.

Returns the WSGI filter for use with paste.deploy.

Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers.

Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled:

X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container.

X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found.

Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response.

Note It is recommended to use a different archive container for each container that is being versioned.

Note Enabling versioning on an archive container is not recommended.

When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created.

A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups.

A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
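For reference, the archive object name described above can be computed with a couple of lines of Python. This is only a sketch of the naming scheme, not a public API; the middleware derives the name internally:

```python
def archive_name(object_name, timestamp):
    # <length> is the 3-character zero-padded hex length of the object name
    return '%03x%s/%s' % (len(object_name), object_name, timestamp)

# archive_name('myobject', '1512508563.00000') -> '008myobject/1512508563.00000'
```

This is why the listings in the examples that follow use prefix=008myobject/: 8, the length of myobject, is 008 in zero-padded hexadecimal.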
To reiterate: a POST updates metadata in place, and a new version is only created when the content of the object changes. A DELETE to a versioned object will be handled in one of the two ways described above.

To restore a previous version of an object, find the desired version in the archive container, then issue a COPY with a Destination header indicating the original location. This will archive the current version, just as a PUT over the versioned object would. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it.

This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline, and the allow_versions flag in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature.

In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request.

Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location.

If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that they can now be tracked by the versioned_writes middleware.

Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss.

First, create a container with the X-Versions-Location header, or add the header to an existing container. Also make sure the container referenced by X-Versions-Location exists.
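Before the walkthrough below, a minimal sketch of the proxy-server.conf pieces this requires; the egg reference follows the usual swift#name convention used elsewhere in this document, and the pipeline ellipsis stands for other middleware you may have chosen:

```
[pipeline:main]
pipeline = ... versioned_writes proxy-server

[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
```

With the middleware enabled, the container setup continues: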
In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-Versions-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from versions container and back in container container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location" }, { "data": "In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-History-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> -H \"Destination: container/myobject\" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allowversionedwrites to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H \"X-Auth-Token: <token>\" -H \"X-Remove-Versions-Location: x\" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name. 
Handle DELETE requests when in history mode. Copy the current version of the object to versions_container and write a delete marker before proceeding with the original request.

req original request.

versions_cont container where previous versions of the object are stored.

api_version api version.

account_name account name.

object_name name of object of original request

Copy the current version of the object to versions_container before proceeding with the original request.

req original request.

versions_cont container where previous versions of the object are stored.

api_version api version.

account_name account name.

object_name name of object of original request

Profiling middleware for Swift Servers.

The current implementation is based on the eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates CPU timing statistics for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed at the URLs below.

Index page for browsing profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json

NFL is defined by the concatenation of file name, function name and the
first line number, e.g.:
    account.py:50(GETorHEAD)
or with the full path:
    /opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:

http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as the proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory.

The profiling data is provided in four formats: binary (the default), json, csv and an ODF spreadsheet, the last of which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability, which is enabled by using the matplotlib toolkit; it must also be installed if you want to use this feature.
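To enable the profiling middleware, add it to the pipeline and give it a filter section. The sketch below is modeled on the sample configuration files mentioned above; the option names and values here are assumptions drawn from those samples, not a definitive reference:

```
[filter:xprofile]
use = egg:swift#xprofile
# where the mini web UI is served from
path = /__profile__
# how often (in seconds) accumulated profile data is flushed to disk
dump_interval = 5.0
```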
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.cname_lookup.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Header-Name: {arbitrary:json,encoded:string} JSON will be forced ASCII (containing six-char uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys dont cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you" }, { "data": "acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parseaclv1() or parseaclv2()) the return value of parseaclv1() or parseaclv2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {groups: [], referrers: []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, Im looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in init. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader(Accept, text/html) Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skipacceptencoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed. 
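Returning to the ACL helpers documented above, here is a short sketch of how they combine; the header value is illustrative:

```python
from swift.common.middleware.acl import clean_acl, parse_acl, referrer_allowed

# Normalize a client-supplied ACL header value; raises ValueError if malformed.
value = clean_acl('x-container-read',
                  '.r:*.example.com, .r:-thief.example.com, bobs_account')

# Split the cleaned value into referrer designations and group names.
referrers, groups = parse_acl(value)

# Check an incoming Referer header value against the referrer ACL.
allowed = referrer_allowed('http://www.example.com/index.html', referrers)
```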
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be" }, { "data": "req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us for accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFTCONFFILE and reset module level global constraint attrs, populating OVERRIDECONSTRAINTS AND EFFECTIVECONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the files mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the users container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digestalgorithm, hexencoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. 
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated itll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULTALLOWEDDIGESTS. logger optional logger; if provided, use it issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared" }, { "data": "digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/on127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g, {states:updating} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. 
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
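Tying CompressingFileReader to the client it was designed for, a small sketch; the conf path, user agent string and object names are illustrative:

```python
from swift.common.internal_client import CompressingFileReader, InternalClient

client = InternalClient('/etc/swift/internal-client.conf', 'example-agent', 3)
with open('/var/log/report.log', 'rb') as fp:
    # The reader gzip-compresses the file on the fly as upload_object()
    # reads from it, so the stored object is compressed.
    client.upload_object(CompressingFileReader(fp), 'AUTH_test',
                         'logs', 'report.log.gz')
```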
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
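For instance, the iterators above make walking an account straightforward; a sketch, with the conf path and account name illustrative:

```python
from swift.common.internal_client import InternalClient

client = InternalClient('/etc/swift/internal-client.conf', 'example-agent', 3)
for container in client.iter_containers('AUTH_test'):
    # Each listing entry is a dict, as described above.
    for obj in client.iter_objects('AUTH_test', container['name']):
        print(container['name'], obj['name'])
```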
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group : param pid: process id : param sig: signal to send Send signal to process and check process name : param pid: process id : param sig: signal to send : param name: name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the servers binaries are installed or" }, { "data": "server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that arent responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesnt use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but its hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldnt work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract to the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache. 
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include tempurlsignature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to getswiftinfo. in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large" }, { "data": "req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. maxgettime maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) responsebodylength optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .appiterrange(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnect. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segments etag doesnt match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from fromr to tor. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key. 
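The registry calls above are typically made at middleware import or instantiation time. A minimal sketch follows, assuming these helpers are importable from swift.common.registry; the middleware name, capability keys, header and query parameter are all invented:
```
# Hypothetical registration of cluster info and sensitive values.
from swift.common.registry import (
    register_swift_info, get_swift_info,
    register_sensitive_header, register_sensitive_param)

# Expose capabilities under /info; keys may not contain '.'.
register_swift_info('my_middleware', max_widget_size=1024)
# Admin-only details, withheld unless admin=True is passed.
register_swift_info('my_middleware', admin=True, secret_knob=3)

print(get_swift_info()['my_middleware'])    # public section only
print(get_swift_info(admin=True)['admin'])  # includes admin info

# Ask proxy logging to redact our secret header and query parameter.
register_sensitive_header('X-My-Middleware-Token')
register_sensitive_param('my_signature')
```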
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
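The prefix helpers above make it easy to build correctly namespaced metadata header names without hard-coding strings. A minimal sketch, with illustrative (not normative) outputs shown in comments:
```
# Sketch of the metadata-namespace helpers documented above.
from swift.common.request_helpers import (
    get_sys_meta_prefix, get_user_meta_prefix,
    get_object_transient_sysmeta)

print(get_sys_meta_prefix('object'))    # e.g. 'x-object-sysmeta-'
print(get_user_meta_prefix('account'))  # e.g. 'x-account-meta-'

# Transient sysmeta is replaced wholesale on every POST, like user meta.
print(get_object_transient_sysmeta('crypto-meta'))
# e.g. 'x-object-transient-sysmeta-crypto-meta'
```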
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
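The is_*_meta tests pair naturally with the strip_*_prefix and remove_items helpers described here. A short hypothetical sketch (the header names and values are invented):
```
# Pairing the metadata tests with the strip/remove helpers above.
from swift.common.request_helpers import (
    is_sys_meta, is_user_meta, strip_sys_meta_prefix, remove_items)

hdr = 'x-container-sysmeta-shard-root'
if is_sys_meta('container', hdr):
    print(strip_sys_meta_prefix('container', hdr))  # 'shard-root'

# remove_items() pops matching headers and returns what it removed.
headers = {'x-object-meta-color': 'red', 'content-length': '42'}
removed = remove_items(headers, lambda k: is_user_meta('object', k))
print(removed)  # {'x-object-meta-color': 'red'}
print(headers)  # {'content-length': '42'}
```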
key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object" }, { "data": "HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasnt stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but its also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Requests Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers[Content-Range] sets and gets the value of headers.environ[HTTPCONTENTRANGE] Bases: object Wraps a Requests If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Requests Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. 
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered syntactically invalid, and thus a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid and ValueError will be thrown so that the header will be ignored: the value does not start with bytes=; the range start is greater than the end, e.g. bytes=5-3; the range has neither start nor end, e.g. bytes=-; the range has no hyphen, e.g. bytes=45; a range value is non-numeric; or any combination of the above. Every syntactically valid range will be added to the ranges list even when some of the ranges may not be satisfied by the underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method __init__ made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored, which will eventually cause the response to be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, and 416 will be returned; if a returned list has at least one element, the list indicates that there is at least one valid range and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as an Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with a property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this request's environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but is not a valid Timestamp the request's X-Timestamp header, as a Timestamp Calls the application with this request's environment. Returns a Response object that wraps up the application's result.
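The Range parsing and satisfiability rules described above can be seen end to end in a short sketch; the header values are invented, and the outputs in comments follow from those rules:
```
# Hypothetical Range headers, per the parsing rules described above.
from swift.common.swob import Range

r = Range('bytes=0-99,1000-')
print(r.ranges)                  # [(0, 99), (1000, None)]
# For a 500-byte body only the first range is satisfiable; the end is
# exclusive here (99 + 1) because the header form is inclusive.
print(r.ranges_for_length(500))  # [(0, 100)]

try:
    Range('bytes=5-3')  # start greater than end: syntactically invalid
except ValueError:
    print('entire header ignored')
```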
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers dont contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the pathinfo, and appends it to the scriptname. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Requests" }, { "data": "Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted restwithlast If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTPXTIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTPUSERAGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environments HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leaverelativelocation'] = True in the WSGI environment. Attempt to construct an absolute location. 
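A minimal sketch tying together Request.blank, split_path, params and get_response; the paths, headers and the tiny WSGI app are invented for illustration:
```
# Hypothetical request construction and dispatch with swob.
from swift.common.swob import Request, Response

req = Request.blank('/v1/AUTH_test/cont/obj?format=json',
                    headers={'X-Trans-Id': 'tx123'})
version, account, container, obj = req.split_path(4, 4, True)
print(account, container, obj)  # AUTH_test cont obj
print(req.params)               # {'format': 'json'}

# get_response() runs any WSGI app and wraps the result in a Response.
def tiny_app(env, start_response):
    return Response(body='hello')(env, start_response)

resp = req.get_response(tiny_app)
print(resp.status_int, resp.body)  # 200 b'hello'
```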
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of a If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need support conditional requests. Its most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or appiter to reset the contentlength properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators close methods, if" }, { "data": "Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of getnext_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gced Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads. 
This is for fairness; if the network is outpacing the CPU, well always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients dont starve. The number 5 here was chosen by making stuff up. Its not every single chunk, but its not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterators next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used" }, { "data": "Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <accountname>/<containername>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces. 
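A hypothetical sketch of GreenAsyncPile, showing that results are yielded in completion order rather than spawn order; the delays and names are invented:
```
# Results come back as jobs finish, not in the order they were spawned.
import eventlet
from swift.common.utils import GreenAsyncPile

def fetch(delay, name):
    eventlet.sleep(delay)
    return name

pile = GreenAsyncPile(3)  # pool of 3 green threads
pile.spawn(fetch, 0.2, 'slow')
pile.spawn(fetch, 0.0, 'fast')
print(list(pile))  # ['fast', 'slow'] -- completion order
```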
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elementspersecond the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root containers own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, rootcontainer, parentcontainer and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/ <rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <accountname>/<containername>. timestamp a timestamp that represents the time at which the shard ranges lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range" }, { "data": "upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard ranges objectcount and bytesused were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp. 
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard ranges ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard ranges parent shard range, the parents parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard ranges root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard ranges root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class init() method and requires all of the init() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if objectcount or bytesused cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard" }, { "data": "This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. 
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shardsaccount, rootcontainer, parent_container and timestamp. a string of the form <accountname>/<containername> Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if objectcount or bytesused cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and endmarker. If none of includes, marker or endmarker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be" }, { "data": "A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found. 
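A hypothetical sketch of ShardRange and ShardRangeList, using invented shard names and bounds, showing the stats aggregation and filtering described above:
```
# Invented shard names/bounds; '' means the MIN/MAX outer bounds.
import time
from swift.common.utils import ShardRange, ShardRangeList, Timestamp

now = Timestamp(time.time())
ranges = ShardRangeList([
    ShardRange('.shards_a/c-0', now, lower='', upper='m',
               object_count=10),
    ShardRange('.shards_a/c-1', now, lower='m', upper='',
               object_count=5),
])
print(ranges.object_count)  # 15, summed over all items

# filter(marker='n') keeps only ranges whose upper bound exceeds 'n'.
print([sr.name for sr in ranges.filter(marker='n')])
# ['.shards_a/c-1']
```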
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source.
```
>>> si = Spliterator(["abcde", "fg", "hijkl"])
>>> ''.join(si.take(4))
"abcd"
>>> ''.join(si.take(3))
"efg"
>>> ''.join(si.take(1))
"h"
>>> ''.join(si.take(3))
"ijk"
>>> ''.join(si.take(3))
"l"  # shorter than requested; this can happen with the last iterator
```
Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.Timeout, it reduces the number of context switches in eventlet by avoiding scheduling actions (throwing an exception) and then unscheduling them if the timeouts are cancelled. For example: at T+0, a timeout(10) is scheduled, so the watchdog greenlet sleeps 10 seconds; at T+1, a timeout(15) is scheduled, which expires after the current one, so there is no need to wake up the watchdog greenlet; at T+2, a timeout(5) is scheduled, which expires first, so the watchdog greenlet is woken up to calculate a new sleep period; at T+7, the third timeout expires, and the watchdog greenlet then waits for the 1st timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
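A hypothetical sketch of the Watchdog described above together with the WatchdogTimeout context manager documented below; ChunkReadTimeout is used here simply as an existing eventlet.Timeout subclass, and the durations are invented:
```
# Sketch: a scheduled timeout firing via the watchdog greenthread.
import eventlet
from swift.common.exceptions import ChunkReadTimeout
from swift.common.utils import Watchdog, WatchdogTimeout

dog = Watchdog()
dog.spawn()  # start the watchdog greenthread

try:
    with WatchdogTimeout(dog, 0.1, ChunkReadTimeout):
        eventlet.sleep(0.5)  # outlives the 0.1s timeout
except ChunkReadTimeout:
    print('timed out')
```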
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expire, must inherit from eventlet.Timeout timeout_at allow to force the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. eg: they can be used to exclude some elements based on a custom condition defined by the caller. hookpre(device|partition|suffix|hash) are called before yielding the element, hookpos(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. eg: saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yieldhashdirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitionsfilter a callable taking (datadirpath, [list of parts]) as parameters and returning a [list of parts] suffixesfilter a callable taking (partpath, [list of suffixes]) as parameters and returning a [list of suffixes] hashesfilter a callable taking (suffpath, [list of hashes]) as parameters and returning a [list of hashes] hookpredevice a callable taking device_path as parameter hookpostdevice a callable taking device_path as parameter hookprepartition a callable taking part_path as parameter hookpostpartition a callable taking part_path as parameter hookpresuffix a callable taking suff_path as parameter hookpostsuffix a callable taking suff_path as parameter hookprehash a callable taking hash_path as parameter hookposthash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yieldhashdirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize no of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesnt crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string. 
If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N" }, { "data": "Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether thats multipart/byteranges or not. This is almost, but not quite, the inverse of requesthelpers.httpresponsetodocument_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the ranges Content-Type header multipart/byteranges case: equal to the response length). If omitted, * will be used. Each partiter will be exhausted prior to calling next(rangesiter). boundary MIME boundary to use, sans dashes (e.g. boundary, not boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See documentiterstohttpresponse_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we dont log a 499 in the proxy just because we realized we dont care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the contenttype string minus any swiftbytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and endmarker. If none of includes, marker or endmarker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for a which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it. 
It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULTDRAINLIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form:
```
<hash>[_<epoch>].db
```
where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...); register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...); wait for events with epoll_wait(efd, ...). If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the Python select module to test if poll() is available on the platform. Return the partition number for a given hex hash and partition power. hex_hash A hash string part_power partition power partition number devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but a X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.:
```
'lock', None => 'lock'
'lock', 0 => 'lock'
'lock', 1 => 'lock-1'
'object', 2 => 'object-2'
```
base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from CPython 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to the file descriptor at the target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma-separated values. Load a recon cache file. Treats a missing file as empty. Context manager that acquires a lock on a file.
This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in write mode, but Python doesn't allow directories to be opened in write mode, so we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lock_path). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string to distinguish different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if the target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set, get, etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read()
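A minimal sketch of the lock_path() context manager described above; the directory and timeout are invented:
```
# Serializing work on a directory with lock_path().
from swift.common.exceptions import LockTimeout
from swift.common.utils import lock_path

try:
    with lock_path('/srv/tmp', timeout=2):
        pass  # exclusive work on the directory happens here
except LockTimeout:
    print('another process held the lock for more than 2 seconds')
```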
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. "divider", not "--divider") read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ('form-data', {'name': 'somefile', 'filename': 'test.html'}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples:
```
'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')])
'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')])
```
content_type content_type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension.
```
>>> parse_db_filename("ab2134.db")
('ab2134', None, '.db')
>>> parse_db_filename("ab2134_1234567890.12345.db")
('ab2134', '1234567890.12345', '.db')
```
filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. iterable an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
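The renamer() behaviour described above can be exercised directly. Here is a minimal hedged sketch; the import path matches swift.common.utils, but the file paths and the decision to skip fsync are illustrative assumptions, not taken from the text:
```python
# Hedged sketch: atomically move a freshly written file into place with
# renamer(); paths are hypothetical.
from swift.common.utils import renamer

tmp_path = '/srv/node/d1/tmp/partial-upload'
final_path = '/srv/node/d1/objects/123/abc/1234.data'

# renamer() creates any missing parent directories of final_path and, by
# default, fsyncs the containing directory and each newly created one.
renamer(tmp_path, final_path)

# Per the docs, callers that do not need the extra durability can turn
# the additional fsyncs off explicitly:
# renamer(tmp_path, final_path, fsync=False)
```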
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different types of servers: If option replication_server is None then this decorator doesn't matter. If option replication_server is True then ONLY methods decorated with this decorator will be started. If option replication_server is False then methods decorated with this decorator will NOT be started. func function to mark accessible for replication Takes a list of iterators, yielding an element from each in a round-robin fashion until all of them are exhausted. its list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate a device's variables inside an rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dirext if present directories that end with dirext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that item's value will determine if the replication ip address and port are returned. If neither use_replication nor node_dict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. s1 the first string s2 the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error.
value some base64-encoded data allow_line_breaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include:
```
READY=1
RELOADING=1
STOPPING=1
STATUS=<some string>
```
logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swift's WSGI server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensure it is synced to disk, then perform a move to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters.
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. servers_per_port (int) The number of workers to run per port. Yields all known listen sockets. Log a server's exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. Yields all known listen sockets. Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited.
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or None to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env['PATH_INFO']. url += quote(environ['PATH_INFO']) query_string The new query_string or None to use the original. When building a url, a Webob Request will append the query string directly to the url. url += '?' + env['QUERY_STRING'] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_env make_subrequest calls this make_env to help build the swob.Request.
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a servers_per_port setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allow_modify_pipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whose first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclass (e.g. StoragePolicy, ECStoragePolicy, etc) or use reload_storage_policies() to load POLICIES from swift.conf. The object_ring property is lazy loaded once the service's swift_dir is known via get_object_ring(), but it may be over-ridden via the object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header.
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index. if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replica / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: if a policy with index 0 is not declared and no other policies are defined, Swift will create one; the policy index must be a non-negative integer; if no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default; policy indexes must be unique; policy names are required; policy names are case insensitive; policy names must contain only letters, digits or a dash; policy names must be unique; the policy name Policy-0 can only be used for the policy with index 0; if any policies are defined, exactly one policy must be declared default; deprecated policies can not be declared the default. Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
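To make the relationship between these helpers concrete, here is a small hedged sketch; it assumes a cluster whose swift.conf defines a policy at index 1, and the import path is swift.common.storage_policy:
```python
# Hedged sketch of the storage policy helpers documented above; assumes a
# swift.conf that defines a policy with index 1.
from swift.common.storage_policy import (
    POLICIES, get_policy_string, split_policy_string)

policy = POLICIES.get_by_index(1)            # None if no such policy
datadir = get_policy_string('objects', 1)    # -> 'objects-1'
base, policy = split_policy_string(datadir)  # round-trips the encoding
assert base == 'objects'
# Index 0 (or None) is the legacy case: no suffix is added.
assert get_policy_string('objects', 0) == 'objects'
```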
{ "category": "Runtime", "file_name": "middleware.html#encryption.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Header-Name: {"arbitrary":"json","encoded":"string"} JSON will be forced ASCII (containing six-char \uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys don't cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you like). acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parse_acl_v1() or parse_acl_v2()) the return value of parse_acl_v1() or parse_acl_v2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {'groups': [], 'referrers': []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, I'm looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in __init__. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader('Accept', 'text/html') Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skip_accept_encoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed.
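As a rough illustration of the buffered connection class documented above, the sketch below drives it by hand; the host, path, and header are hypothetical, and endheaders() comes from the stdlib http.client.HTTPConnection base class rather than from anything documented here:
```python
# Hedged sketch: using BufferedHTTPConnection directly for a backend-style
# request; in practice the http_connect() helpers below build this for you.
from swift.common.bufferedhttp import BufferedHTTPConnection

conn = BufferedHTTPConnection('127.0.0.1:6200')
conn.putrequest('GET', '/sda1/0/AUTH_test/c/o')
conn.putheader('X-Backend-Storage-Policy-Index', '0')
conn.endheaders()            # inherited from http.client.HTTPConnection
resp = conn.getresponse()    # a BufferedHTTPResponse
print(resp.status)
body = resp.read()
resp.close()
```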
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always b'\n' for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isn't possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory.
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be checked. req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us from accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFT_CONF_FILE and reset module level global constraint attrs, populating OVERRIDE_CONSTRAINTS and EFFECTIVE_CONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the file's mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the user's container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digest_algorithm, hex_encoded_digest) from a client-provided string of the form:
```
<hex-encoded digest>
```
or:
```
<algorithm>:<base64-encoded digest>
```
Note that hex-encoded strings must use one of sha1, sha256, or sha512.
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated it'll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULT_ALLOWED_DIGESTS. logger optional logger; if provided, use it to issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared secret. digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/o\n127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will be a HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g., {'states': 'updating'} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError.
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response resp_chunk_size if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the object's contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make requests. node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server.
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly to the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in don't over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of times. func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyword arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file object's read(). Compressed chunk from file object.
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. request_tries Number of tries before InternalClient.make_request() gives up. use_replication_network Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in global_conf will override those in conf_path except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The container's account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The container's account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The container's account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The object's account. container The object's container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (container_count, object_count) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account metadata. account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The container's account.
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
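Pulling the methods above together, a hedged sketch of typical InternalClient usage; the config path, account and container names are placeholders, and error handling is reduced to catching UnexpectedResponse:

```
import io

from swift.common.internal_client import InternalClient, UnexpectedResponse

# The conf path and names below are placeholders for illustration.
client = InternalClient('/etc/swift/internal-client.conf',
                        'example-agent', request_tries=3)
account = 'AUTH_example'

try:
    if not client.container_exists(account, 'backups'):
        client.create_container(account, 'backups')

    # upload_object takes the file object first, per the signature above
    client.upload_object(io.BytesIO(b'payload'), account, 'backups',
                         'notes.txt', headers={'Content-Type': 'text/plain'})

    # set_object_metadata overwrites the object's user metadata
    client.set_object_metadata(account, 'backups', 'notes.txt',
                               {'color': 'blue'},
                               metadata_prefix='X-Object-Meta-')

    for obj in client.iter_objects(account, 'backups', prefix='notes'):
        print(obj['name'], obj['bytes'])
except UnexpectedResponse as err:
    # err.resp is the unexpected response, per the exception class above
    print('request failed:', err.resp.status)
```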
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group : param pid: process id : param sig: signal to send Send signal to process and check process name : param pid: process id : param sig: signal to send : param name: name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the servers binaries are installed or" }, { "data": "server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that arent responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesnt use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but its hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldnt work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract to the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache. 
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include temp_url_sig and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to get_swift_info. Note: do not use "." in name or in any of the keys in kwargs, as "." is used in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has "." in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large object. req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. max_get_time maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) response_body_length optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .app_iter_range(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnects. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like the first segment being missing or the first segment's etag not matching the manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of a path-like header is well formatted. We assume the caller ensures that the specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy the desired subset of headers from from_r to to_r. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key.
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object" }, { "data": "HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasnt stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but its also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Requests Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers[Content-Range] sets and gets the value of headers.environ[HTTPCONTENTRANGE] Bases: object Wraps a Requests If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Requests Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. 
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will be all considered as syntactically invalid, thus, a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid, ValueError will be thrown so that the header will be ignored. value not starts with bytes= range value start is greater than the end, eg. bytes=5-3 range does not have start or end, eg. bytes=- range does not have hyphen, eg. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method init made sure that any range in ranges list is syntactically valid. So if length is None or size of the ranges is zero, then the Range header should be ignored which will eventually make the response to be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, 416 will be" }, { "data": "if a returned list has at least one element, the list indicates that there is at least one range valid and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as a Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with an property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this requests environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the requests X-Timestamp header, as a Timestamp Calls the application with this requests environment. Returns a Response object that wraps up the applications result. 
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers dont contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the pathinfo, and appends it to the scriptname. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Requests" }, { "data": "Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted restwithlast If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTPXTIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTPUSERAGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environments HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leaverelativelocation'] = True in the WSGI environment. Attempt to construct an absolute location. 
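The Request and Response objects above compose naturally with the helpers from swift.common.request_helpers; a small, self-contained sketch (the WSGI app here is a stand-in, not a real proxy pipeline):

```
from swift.common.swob import Request, Response
from swift.common.request_helpers import get_param

# A trivial WSGI app standing in for a real pipeline.
def app(env, start_response):
    return Response(body=b'hello', content_type='text/plain')(
        env, start_response)

req = Request.blank('/v1/AUTH_test/cont/obj/with/slashes?marker=m%C3%BC')
version, account, container, obj = req.split_path(4, 4, True)
assert obj == 'obj/with/slashes'                 # rest_with_last=True

marker = get_param(req, 'marker', default='')    # decoded native str

resp = req.get_response(app)
print(resp.status_int, resp.body)
```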
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of a If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need support conditional requests. Its most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or appiter to reset the contentlength properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators close methods, if" }, { "data": "Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of getnext_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gced Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads. 
This is for fairness; if the network is outpacing the CPU, well always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients dont starve. The number 5 here was chosen by making stuff up. Its not every single chunk, but its not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterators next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used" }, { "data": "Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <accountname>/<containername>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces. 
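A short sketch of the Namespace bound semantics described above; the names and bounds are illustrative:

```
from swift.common.utils import Namespace

ns = Namespace('AUTH_a/shard-c', lower='d', upper='m')

# Membership follows the documented semantics: the lower bound is
# excluded, the upper bound is included.
assert 'elephant' in ns
assert 'd' not in ns and 'm' in ns

other = Namespace('AUTH_a/other', lower='k', upper='z')
assert ns.overlaps(other)
assert not ns.includes(other)   # other extends past ns's upper bound
```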
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elementspersecond the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root containers own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, rootcontainer, parentcontainer and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/ <rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <accountname>/<containername>. timestamp a timestamp that represents the time at which the shard ranges lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range" }, { "data": "upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard ranges objectcount and bytesused were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp. 
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard ranges ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard ranges parent shard range, the parents parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard ranges root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard ranges root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class init() method and requires all of the init() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if objectcount or bytesused cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard" }, { "data": "This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. 
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shardsaccount, rootcontainer, parent_container and timestamp. a string of the form <accountname>/<containername> Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if objectcount or bytesused cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and endmarker. If none of includes, marker or endmarker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be" }, { "data": "A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found. 
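A hedged sketch of building a ShardRangeList and using the filter and stats helpers described above; the names are illustrative and the list is created already sorted:

```
from swift.common.utils import ShardRange, ShardRangeList, Timestamp

now = Timestamp.now()
ranges = ShardRangeList([
    ShardRange('.shards_AUTH_a/c-0', now, lower='', upper='m',
               object_count=10),
    ShardRange('.shards_AUTH_a/c-1', now, lower='m', upper='',
               object_count=5),
])

print(ranges.object_count)          # 15, summed over all items
print(ranges.lower, ranges.upper)   # outermost bounds (list is sorted)

# filter() narrows to ranges whose namespace covers 'kiwi'
matching = ranges.filter(includes='kiwi')
assert [sr.name for sr in matching] == ['.shards_AUTH_a/c-0']
```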
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ``` >>> si = Spliterator(["abcde", "fg", "hijkl"]) >>> ''.join(si.take(4)) "abcd" >>> ''.join(si.take(3)) "efg" >>> ''.join(si.take(1)) "h" >>> ''.join(si.take(3)) "ijk" >>> ''.join(si.take(3)) "l" # shorter than requested; this can happen with the last iterator ``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts, it reduces the amount of context switching in eventlet by avoiding the need to schedule actions (throw an Exception) and then unschedule them if the timeouts are cancelled. For example, a request for timeout(10) puts the watchdog greenlet to sleep for 10 seconds; a later request for a shorter timeout wakes the watchdog greenlet so it can calculate a new sleep period; the greenlet then wakes up again for the first timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
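A rough sketch of driving the watchdog described above under eventlet; the greenthread-management method names (spawn/kill) are assumptions inferred from the summaries here, so treat this as illustrative rather than definitive:

```
import eventlet
from swift.common.utils import Watchdog, WatchdogTimeout

watchdog = Watchdog()
watchdog.spawn()    # assumed name for "start the watchdog greenthread"
try:
    # WatchdogTimeout schedules a timeout on entry, cancels it on exit
    with WatchdogTimeout(watchdog, 0.1, eventlet.Timeout):
        eventlet.sleep(1)    # deliberately overruns the timeout
except eventlet.Timeout:
    print('timed out')
finally:
    watchdog.kill()    # assumed name for "stop the watchdog greenthread"
```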
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expire, must inherit from eventlet.Timeout timeout_at allow to force the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. eg: they can be used to exclude some elements based on a custom condition defined by the caller. hookpre(device|partition|suffix|hash) are called before yielding the element, hookpos(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. eg: saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yieldhashdirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitionsfilter a callable taking (datadirpath, [list of parts]) as parameters and returning a [list of parts] suffixesfilter a callable taking (partpath, [list of suffixes]) as parameters and returning a [list of suffixes] hashesfilter a callable taking (suffpath, [list of hashes]) as parameters and returning a [list of hashes] hookpredevice a callable taking device_path as parameter hookpostdevice a callable taking device_path as parameter hookprepartition a callable taking part_path as parameter hookpostpartition a callable taking part_path as parameter hookpresuffix a callable taking suff_path as parameter hookpostsuffix a callable taking suff_path as parameter hookprehash a callable taking hash_path as parameter hookposthash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yieldhashdirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize no of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesnt crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string. 
If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header Finally, there is one optional key that is used in the multipart/byteranges case: entity_length: length of the requested entity (not necessarily equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. boundary, not --boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content_type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it.
Its basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used" }, { "data": "This method will read up to DEFAULTDRAINLIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fspathor_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystems mount point space_needed minimum bytes or percentage of free space ispercent if True, then spaceneeded is treated as a percentage of the filesystems capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that paths dir. A valid db filename has the form: ``` <hash>[_<epoch>].db ``` where <hash> matches the <hash> part of the given db_path as would be parsed by parsedbfilename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isnt. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldnt wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: its hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epollctl(efd, EPOLLCTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesnt do those things. Really, it cant even try to do those things since it doesnt get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so theres no problem with forking. As eventlet monkey patching is now done before call get_hub() in wsgi.py if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use eventlet.patcher.original function to get python select module to test if poll() is available on platform. Return partition number for given hex hash and partition power. :param hex_hash: A hash string :param part_power: partition power :returns: partition number devices directory where devices are mounted (e.g. 
/srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but a X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ``` 'lock', None => 'lock'
'lock', 0 => 'lock'
'lock', 1 => 'lock-1'
'object', 2 => 'object-2'
``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file.
This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in" }, { "data": "If None, defaults to DEFAULTLOCKTIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULTLOCKTIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, file or directory has to be opened in Write mode. Python doesnt allow directories to be opened in Write Mode. So we workaround by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULTLOCKTIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string to distinguishes different type of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new paths filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if target directory already exists. This behaviour is similar to Python 3.xs os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document contenttype content type of the appiter, used to determine whether it conains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in MemcacheRing class, such as memcached set, get and etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not divider) readchunksize size of strings read via input_file.read() Ensures the path is a directory or makes it if not. 
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file)" }, { "data": "input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not divider) readchunksize size of strings read via input_file.read() Get a string representation of a nodes location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name=somefile; filename=test.html Return data like (form-data, {name: somefile, filename: test.html}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (firstbyte, lastbyte, total_size). See RFC 7233 section 4.2 for details on the header format, but its basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset, 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset, 'UTF-8'), ('level', '1')]) ``` contenttype contenttype to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >> parsedbfilename(\"ab2134.db\") ('ab2134', None, '.db') >> parsedbfilename(\"ab2134_1234567890.12345.db\") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash , epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard librarys http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process. 
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be" }, { "data": "Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators have been executed before continuing. :param iterable: an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsyncd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories. 
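For example, here is a small sketch of the rate-limiting helper described above; the work_items iterable and process() call are hypothetical stand-ins for whatever loop you want to throttle:

```python
from swift.common.utils import ratelimit_sleep

running_time = 0  # best to start at zero, per the description above
for item in work_items:  # hypothetical iterable of work
    # Sleeps as needed so the loop never exceeds ~10 iterations/second
    running_time = ratelimit_sleep(running_time, max_rate=10)
    process(item)  # hypothetical per-item work
```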
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to a object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different type of servers: If option replication_server is None then this decorator doesnt matter. If option replication_server is True then ONLY decorated with this decorator methods will be started. If option replication_server is False then decorated with this decorator methods will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are" }, { "data": ":param its: list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate devices variables inside a rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dirext if present directories that end with dirext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that items value will determine if the replication ip address and port are returned. If neither usereplication nor nodedict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error. 
value some base64-encoded data allowlinebreaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swifts wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and wont lead to directory traversal when" }, { "data": "device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowedsynchosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.containersyncrealms.ContainerSyncRealms to validate against. A tuple of (errorstring, validatedendpoint, realm, realmkey). The errorstring will None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensure it is synced to disk, then perform a move to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) well intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploys ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entrypointname entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters. 
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entrypointname entry point of middleware or app (Swift only) True if entrypointname is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The serversperport integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. serversperport (int) The number of workers to run per port. Yields all known listen sockets. Log a servers exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and" }, { "data": "Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will passed into the logsockexit() and registerworkerstart() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The sockets (port, server_idx) as yielded by newworkersocks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. Yields all known listen sockets. Log a servers exit. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opqaue_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will passed into the logsockexit() and registerworkerstart() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited. 
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrapsocket if conf specifies certfile Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in globalconf will override those in conffile except where the conf_file option is preceded by set. allowmodifypipeline if True, and the context is a pipeline, and the loaded app has a modifywsgipipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI" }, { "data": "env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env[PATHINFO]. url += quote(environ[PATHINFO]) querystring The new querystring or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += ? + env[QUERY_STRING] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. makeenv makesubrequest calls this make_env to help build the swob.Request. 
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a serversperport setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allowmodifypipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whos first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclasses (e.g. StoragePolicy, ECStoragePolicy, etc) or use reloadstoragepolicies() to load POLICIES from swift.conf. The objectring property is lazy loaded once the services swiftdir is known via getobjectring(), but it may be over-ridden via object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage" }, { "data": "Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reloadstoragepolicies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. 
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index. if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given that the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies defined, Swift will create one The policy index must be a non-negative integer If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default Policy indexes must be unique Policy names are required Policy names are case insensitive Policy names must contain only letters, digits or a dash Policy names must be unique The policy name Policy-0 can only be used for the policy with index 0 If any policies are defined, exactly one policy must be declared default Deprecated policies can not be declared the default Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policyorindex StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license." } ]
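To illustrate the policy-string helpers described above, here is a short sketch; it assumes a cluster whose swift.conf defines a storage policy with index 1:

```python
from swift.common.storage_policy import (
    POLICIES, get_policy_string, split_policy_string)

# Encode a policy index into an on-disk name ...
name = get_policy_string('objects', 1)    # -> 'objects-1'

# ... and decode it again; raises PolicyError for an unknown index.
base, policy = split_policy_string(name)  # -> ('objects', POLICIES[1])
```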
{ "category": "Runtime", "file_name": "middleware.html#formpost.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential: ```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
``` An example client using keystone auth with the python boto library will be: ```
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=OrdinaryCallingFormat())
``` Set s3api before your auth in the pipeline in your proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is: ```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
``` When using keystone, the config will be: ```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
``` Finally, add the s3api middleware section: ```
[filter:s3api]
use = egg:swift#s3api
``` Note keystonemiddleware.authtoken can be located before/after s3api but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 may still remain. Please check the descriptions in the example proxy-server.conf and what happens with each config option before enabling it. The compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name)s. Optionally can retrieve and cache secret from keystone to validate signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
``` you should now use ```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. authenticate method will run pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
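A sketch of the kind of key normalization the class just described provides; the special-casing shown here (title-casing with an S3-style ETag spelling) is illustrative and may differ from the real implementation:

```python
from swift.common.header_key_dict import HeaderKeyDict

class S3HeaderKeyDict(HeaderKeyDict):
    """Illustrative only: normalize keys the way S3 clients expect."""
    @staticmethod
    def _title(s):
        s = HeaderKeyDict._title(s)
        # S3 responses spell this header 'ETag', not 'Etag'
        return 'ETag' if s.lower() == 'etag' else s

headers = S3HeaderKeyDict()
headers['etag'] = '"d41d8cd98f00b204e9800998ecf8427e"'
print(list(headers))  # -> ['ETag']
```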
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swifts HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why we need this?: Original lxml.etree.Element supports only unicode for the text. It declines maintainability because we have to call a lot of encode/decode methods to apply account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such a redundant codes from swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a" }, { "data": "method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time really like as time.mktime the difference from time.mktime is allowing to 2 formats string for the argument for the S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. 
The current s3api (real S3) ACL model is as follows:

```
AccessControlPolicy:
    Owner:
    AccessControlList:
        Grant[n]:
            (Grantee, Permission)
```

Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant includes a single pair of Grantee and Permission. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you want more information about the S3 ACL model in detail, please see the official documentation here: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. (http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACLs. Bases: object Grant class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantees. Methods: init: create a Grantee instance elem: create an ElementTree from itself Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance. from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic ACL handling. (i.e. it will check the acl from ACL_MAP by using HEAD) Make a handler with the name of the controller. (e.g.
BucketAclHandler is for BucketController) It consists of method(s) for the actual S3 method on controllers as follows. Example:

```
class BucketAclHandler(BaseAclHandler):
    def PUT:
        << put acl handling algorithms here for PUT bucket >>
```

Note: If the method doesn't need to call get_response outside of the ACL check, the method has to return the response it needs at the end of the method.

Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step of the request to backend Swift for an incoming request. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is ok but we should check it as soon as possible.

| Controller | Verb | CheckResource | Permission |
|:-|:-|:-|:-|
| Part | PUT | Container | WRITE |
| Uploads | GET | Container | READ |
| Uploads | POST | Container | WRITE |
| Upload | GET | Container | READ |
| Upload | DELETE | Container | WRITE |
| Upload | POST | Container | WRITE |

Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acl such as x-container-read to s3acl Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container's existence. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket requests. Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift account. Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url:

```
/v1/AUTH_Account/$UPLOAD_PATH?extract-archive=tar.gz
```

UPLOAD_PATH is where the files will be expanded to. UPLOAD_PATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows:

```
/v1/AUTH_Account/$UPLOAD_PATH/$FILE_PATH
```

Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc. will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, Content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header:

```
X-Detect-Content-Type: true
```

For example:

```
curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H "Content-Type: application/x-tar" -H "X-Auth-Token: xxx" -H "X-Detect-Content-Type: true"
```

The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example:

```
setfattr -n user.mime_type -v "application/python-setup" setup.py
setfattr -n user.meta.lunch -v "burger and fries" setup.py
setfattr -n user.meta.dinner -v "baked ziti" setup.py
setfattr -n user.stuff -v "whee" setup.py
```

Will get translated to headers:

```
Content-Type: application/python-setup
X-Object-Meta-Lunch: burger and fries
X-Object-Meta-Dinner: baked ziti
```

The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later.
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
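For illustration, here is a minimal client sketch (not from the Swift source; the endpoint, token, and container/object names are hypothetical placeholders, and the third-party requests library is assumed) that issues such a bulk delete and parses the JSON result:

```python
import requests

TOKEN = 'AUTH_tk...'  # placeholder auth token
url = 'http://127.0.0.1:8080/v1/AUTH_test?bulk-delete'  # hypothetical account

# Newline-separated, URL-encoded entries; containers must be empty.
body = '\n'.join(['/my_container/my_object', '/empty_container'])

resp = requests.post(url, data=body.encode('utf-8'),
                     headers={'X-Auth-Token': TOKEN,
                              'Content-Type': 'text/plain',
                              'Accept': 'application/json'})
result = resp.json()  # always a 200; actual outcome is in the body
print(result['Response Status'], result['Number Deleted'])
```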
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD and the log's content length will be 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: the xml response does not include an xml declaration. data_format the resulting format data_dict generated data about results error_list list of quoted filenames that failed root_tag the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised. inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and it's unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set:

| Metadata | Use |
|:--|:--|
| X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. |
| X-Container-Meta-Quota-Count | Maximum object count of the container. |

The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example:

```
[pipeline:main]
pipeline = catch_errors cache tempauth container_quotas proxy-server

[filter:container_quotas]
use = egg:swift#container_quotas
```
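With the middleware in place, a quota can be set by any user able to set container metadata. A minimal sketch (hypothetical endpoint and token; the third-party requests library is assumed) might be:

```python
import requests

resp = requests.post(
    'http://127.0.0.1:8080/v1/AUTH_test/my_container',  # hypothetical
    headers={'X-Auth-Token': 'AUTH_tk...',              # placeholder
             'X-Container-Meta-Quota-Bytes': '10737418240',  # 10 GiB cap
             'X-Container-Meta-Quota-Count': '1000'})        # max 1000 objects
print(resp.status_code)  # expect 204 on success
```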
Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example the ellipsis (...) indicates other middleware you may have chosen to use:

```
[pipeline:main]
pipeline = ... crossdomain ... authtoken ... proxy-server
```

And add a filter section, such as:

```
[filter:crossdomain]
use = egg:swift#crossdomain
cross_domain_policy = <allow-access-from domain="*.example.com" />
    <allow-access-from domain="www.example.com" secure="false" />
```

For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value. The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified:

```
cross_domain_policy = <allow-access-from domain="*" secure="false" />
```

Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e. expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response:

```
{"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}}
```

This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so:

```
swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g
```

Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path.
For example, with the default configuration:

```
container.AUTH-account.example.com/object
container.AUTH-account.example.com/v1/object
```

would both be translated to:

```
container.AUTH-account.example.com/v1/AUTH_account/container/object
```

and:

```
AUTH-account.example.com/container/object
AUTH-account.example.com/v1/container/object
```

would both be translated to:

```
AUTH-account.example.com/v1/AUTH_account/container/object
```

Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a default_reseller_prefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The default_reseller_prefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration:

```
auth-account.example.com/container/object
AUTH-account.example.com/container/object
auth_account.example.com/container/object
AUTH_account.example.com/container/object
```

would all be translated to:

```
<unchanged>.example.com/v1/AUTH_account/container/object
```

When no match is found in reseller_prefixes, the default_reseller_prefix config option is used. When no default_reseller_prefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with default_reseller_prefix = AUTH:

```
account.example.com/container/object
```

would be translated to:

```
account.example.com/v1/AUTH_account/container/object
```

Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects for further details. Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input.
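Tying the deployment note above together, here is a rough configuration sketch (the pipeline shown is abbreviated and the root secret is a placeholder; verify the exact placement and option values against proxy-server.conf-sample) placing the keymaster before the encryption filter:

```
[pipeline:main]
pipeline = catch_errors ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
encryption_root_secret = <base64-encoded value of 32 or more random bytes>

[filter:encryption]
use = egg:swift#encryption
```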
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache:

```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```

Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is:

```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```

Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```

If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:

```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```

The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like:

```
https://swift-cluster.example.com/v1/AUTH_account/container/
```

Optionally, you can include an object prefix to better separate different users' uploads, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:

```
import hmac
from hashlib import sha512
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'

hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size,
                                    max_file_count, expires)
# hmac.new() requires bytes on Python 3, hence the encoding
signature = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                     sha512).hexdigest()
```

The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy. The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being encrypted. secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:

```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```

Setup container and object keys based on the request path. Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys.
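Referring back to the key-derivation scheme above, here is a minimal Python sketch of that formula (the root secret and resource path are hypothetical; this is not the keymaster's actual code):

```python
import hashlib
import hmac
import os

root_secret = os.urandom(32)              # high-entropy 256-bit root secret
path = '/AUTH_test/container/object'      # hypothetical resource path

# <path_key> = HMAC_SHA256(<root_secret>, <path>)
path_key = hmac.new(root_secret, path.encode('utf-8'),
                    hashlib.sha256).digest()
```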
Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone. proxy-server.conf-sample The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to set account auto creation to true:

```
[app:proxy-server]
account_autocreate = true
```

And add a swift authorization filter section, such as:

```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```

The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting, which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:

```
reseller_prefix = NEWAUTH
```

Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:

```
reseller_prefix = AUTH, SERVICE
```

This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:

```
operator_roles
service_roles
```

For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:

```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, some_other_role
```

X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:

```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```

The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
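As an illustration of the UUID-based grantee syntax (the endpoint, token, and both UUIDs below are hypothetical placeholders; the third-party requests library is assumed), read access could be granted to one user in another project with:

```python
import requests

requests.post(
    'http://127.0.0.1:8080/v1/AUTH_1234/shared_container',  # hypothetical
    headers={'X-Auth-Token': 'AUTH_tk...',                  # placeholder
             # grantee is <tenant UUID>:<user UUID>
             'X-Container-Read': '77b8f82565f14814bece56e50c4c240f:'
                                 'db8e3f1f09cf47a5bf72f3b45f8a8d40'})
```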
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:

```
default_domain_id = default
```

The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:

```
allow_names_in_acls = false
```

To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with the reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:

```
allow_overrides = false
```

app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form:

```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```

with a JSON-encoded list of endpoints of the form:

```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```

correspondingly, e.g.:

```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```

Using the v2 API, answers requests of the form:

```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```

with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:

```
{"endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1",
               "http://10.1.1.1:6230/sda3/2/a/c3/o1",
               "http://10.1.1.1:6240/sda4/2/a/c3/o1"],
 "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```

In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
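A small sketch of a client query against the v2 API (host, port, and names are hypothetical, and the third-party requests library is assumed; remember the call is unauthenticated and intended for use inside the cluster):

```python
import requests

data = requests.get(
    'http://127.0.0.1:8080/endpoints/v2/AUTH_test/c3/o1').json()

for url in data['endpoints']:
    print(url)              # one URL per location of the object
print(data['headers'])      # e.g. X-Backend-Storage-Policy-Index
```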
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid. @author: eamonn-otoole Object versioning in Swift has 3 different modes. There are two legacy modes that have a similar API with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either the X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature, containers in a Swift cluster will be in one of 3 possible states: 1. Object Versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter:

```
?versions
```

To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter:

```
?version-id=<id>
```

where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in that version only being deleted and no changes made to the symlink pointing to the current version.
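Putting the API described above together, here is a minimal sketch of the request flow (the endpoint, token, and names are hypothetical placeholders, and the third-party requests library is assumed):

```python
import requests

base = 'http://127.0.0.1:8080/v1/AUTH_test/my_container'  # hypothetical
hdrs = {'X-Auth-Token': 'AUTH_tk...'}                     # placeholder

# Enable versioning on the container (PUT or POST both work).
requests.post(base, headers=dict(hdrs, **{'X-Versions-Enabled': 'true'}))

# Overwrite an object twice; each PUT reports its version id.
v1 = requests.put(base + '/obj', data=b'one', headers=hdrs)
requests.put(base + '/obj', data=b'two', headers=hdrs)

# List all versions of all objects in the container...
listing = requests.get(base + '?versions&format=json', headers=hdrs).json()

# ...and read back the older version by its id.
old_id = v1.headers['X-Object-Version-Id']
old = requests.get(base + '/obj?version-id=' + old_id, headers=hdrs)
assert old.content == b'one'
```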
To enable this new mode in a Swift cluster, the versioned_writes and symlink middlewares must be added to the proxy pipeline, and you must also set the option allow_object_versioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the objects' bytes and etag to use the target's instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object. req original request. versions_cont container where previous versions of the object are stored. account account name. Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:
```
client_ip remote_addr end_time.datetime method path protocol status_int referer user_agent auth_token bytes_recvd bytes_sent client_etag transaction_id headers request_time source log_info start_time end_time policy_index
```
These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by the leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request.
req swob.Request object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wire_status_int the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that makes an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file:
```
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
```
get # of async pendings
get auditor info
get devices
get disk utilization statistics
get # of drive audit errors
get expirer info
get info from /proc/loadavg
get info from /proc/meminfo
get ALL mounted fs from /proc/mounts
get obj/container/account quarantine counts
get reconstruction info
get relinker info, if any
get replication info
get all ring md5sums
get sharding info
get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size.
get md5 of swift.conf
get current time
list unmounted (failed?) devices
get updater info
get swift version
Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of the source object is preserved during object copy. One can also provide additional metadata during the PUT/COPY request. This will over-write any existing conflicting keys.
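For example, a sketch of supplying extra metadata on a copy request (using the X-Copy-From mechanism detailed below; the X-Object-Meta-Color header here is purely illustrative):
```
curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'X-Object-Meta-Color: blue' -H 'Content-Length: 0'
```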
Server side copy can also be used to change the content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. the content-length of the request must be zero). There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:
```
curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0'
```
Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:
```
curl -i -X COPY http://<storage_url>/container2/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0'
```
Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from the container in the source account and permission to write to the container in the destination account). Similar to the examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send a PUT request to copy the object but with an additional header named X-Copy-From-Account specifying the source account. Example:
```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0'
```
Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of the destination account. Example:
```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0'
```
The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:
```
?multipart-manifest=get
```
If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the Static Large Objects (SLO) docs for further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:
```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```
Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:
```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```
If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable:
```
swift post -r '.r:*' container
```
You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive:
```
swift post -m 'web-index:index.html' container
```
You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings:
```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```
Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:
```
swift post -m 'web-listings-css:listings.css' container
```
Set an error file:
```
swift post -m 'web-error:error.html' container
```
Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object:
```
swift post -m 'web-directory-type:text/directory' container
```
Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter called the target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink, the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e.
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function.
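For illustration, a symlink might appear in a JSON container listing roughly as follows (a hypothetical entry; the field values are made up):
```
{
    "name": "my-symlink",
    "bytes": 0,
    "hash": "d41d8cd98f00b204e9800998ecf8427e",
    "content_type": "application/symlink",
    "last_modified": "2024-01-01T00:00:00.000000",
    "symlink_path": "/v1/AUTH_account/other-container/target-object"
}
```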
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:
```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```
Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. req HTTP PUT object request Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:
```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```
Set account auto creation to true in proxy-server.conf:
```
[app:proxy-server]
account_autocreate = true
```
And add a tempauth filter section, such as:
```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```
See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is:
```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```
If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:
```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```
There are three special groups: .reseller_admin can do anything to any account for this auth .reseller_reader can GET/HEAD anything in any account for this auth .admin can do anything within the account If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:
```
$HOST/v1/<reseller_prefix>_<account>
```
Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:
```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```
The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL:
```
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
```
To generate a curl command line from the above:
```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { "admin": ["alice"], "read-write": ["bob", "carol"] }
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```
Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various request for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:
```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```
On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs which contain signatures that are valid for all objects sharing a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```
For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:
```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```
Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO
Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```
You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```
If a prefix-based signature with the prefix pre is desired, set path to:
```
path = 'prefix:/v1/AUTH_account/container/pre'
```
The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```
Another valid URL:
```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```
If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```
Similarly, to lock down the ip to a range of 1.2.3.X (i.e. from 1.2.3.0 to 1.2.3.255):
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
Then the following url would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```
Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```
If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```
In some cases, the client might not be able to present the content of the object, but you still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```
This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTPXPRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTPXSENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that is now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simulataneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archivecontainer>/<length><objectname>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata" }, { "data": "A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object. 
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions:
```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```
Create an object (the first version):
```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```
Now create a new version of that object:
```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```
See a listing of the older versions of the object:
```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```
Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container:
```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```
As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions:
```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```
Create an object (the first version):
```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```
Now create a new version of that object:
```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```
Now delete the current version of the object. Subsequent requests will 404:
```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```
A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object:
```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```
To restore a previous version, simply COPY it from the archive container:
```
curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject"
```
Note that the archive container still has all previous versions of the object, including the source for the restore:
```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```
To permanently delete a previous version, DELETE it from the archive container:
```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp>
```
If you want to disable all functionality, set allow_versioned_writes to False in the middleware options. Disable versioning from a container (x is any value except empty):
```
curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container
```
Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name.
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates CPU timing statistics information for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below. Index page for browsing profile data: ``` http://SERVER_IP:PORT/__profile__ ``` List all profiles to return profile ids in json format: ``` http://SERVER_IP:PORT/__profile__/ http://SERVER_IP:PORT/__profile__/all ``` Retrieve specific profile data in different formats: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods] ``` Retrieve metrics from specific function in json format: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json http://SERVER_IP:PORT/__profile__/current/NFL?format=json http://SERVER_IP:PORT/__profile__/all/NFL?format=json NFL is defined by concatenation of file name, function name and the first line number. e.g.: account.py:50(GETorHEAD) or with full path: /opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD) A list of URL examples: http://localhost:8080/__profile__ (proxy server) http://localhost:6200/__profile__/all (object server) http://localhost:6201/__profile__/current (container server) http://localhost:6202/__profile__/12345?format=json (account server) ``` The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (by default), json, csv and ods spreadsheet, the last of which requires installing the odfpy library: ``` sudo pip install odfpy ``` There's also a simple visualization capability which is enabled by using the matplotlib toolkit; it is also required to be installed if you want to use this feature.
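The profiling data exposed at the URLs above can also be pulled programmatically. A minimal sketch, assuming the middleware is enabled on a proxy at localhost:8080; the host, port and NFL value are illustrative:

```
import requests

BASE = 'http://localhost:8080/__profile__'  # proxy with profiling enabled

# List available profile ids (served as JSON, per the URL table above).
print(requests.get(BASE + '/').text)

# Fetch accumulated stats for the current profile in JSON form.
print(requests.get(BASE + '/current', params={'format': 'json'}).text)

# Drill into a single function by its NFL key, i.e. the concatenation of
# file name, function name and first line number.
nfl = 'account.py:50(GETorHEAD)'  # example NFL from the docs above
print(requests.get('%s/current/%s' % (BASE, nfl),
                   params={'format': 'json'}).text)
```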
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.container_quotas.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Header-Name: {arbitrary:json,encoded:string} JSON will be forced ASCII (containing six-char \uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys don't cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you want). acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parse_acl_v1() or parse_acl_v2()) the return value of parse_acl_v1() or parse_acl_v2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {groups: [], referrers: []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, I'm looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in __init__. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader('Accept', 'text/html') Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True, does not automatically add a Host: header skip_accept_encoding if True, does not automatically add an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed.
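To show how these pieces fit together, here is a minimal sketch of a buffered backend request using the http_connect() helper documented below; the node address, device and partition values are illustrative, and a real backend server must be listening for this to succeed:

```
from swift.common.bufferedhttp import http_connect

# Open a buffered connection to a (hypothetical) object server.
conn = http_connect('127.0.0.1', 6200,   # node ip and port
                    'sdb1',              # device on that node
                    '0',                 # partition
                    'HEAD', '/AUTH_test/c/o',
                    headers={'X-Backend-Storage-Policy-Index': '0'})
resp = conn.getresponse()   # a BufferedHTTPResponse; header reads are buffered
print(resp.status)
resp.read()
resp.close()
```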
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be checked. req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us from accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFT_CONF_FILE and reset module level global constraint attrs, populating OVERRIDE_CONSTRAINTS and EFFECTIVE_CONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the file's mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the user's container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digest_algorithm, hex_encoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512.
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated it'll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULT_ALLOWED_DIGESTS. logger optional logger; if provided, use it to issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared secret. digest constructor or the string name for the digest to use in calculating the HMAC. Defaults to SHA1. ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/o\n127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will be a HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g., {'states': 'updating'} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError.
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response resp_chunk_size if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the object's contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make requests. node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP POST request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server.
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
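A minimal sketch of CompressingFileReader in use, with illustrative source data; the class is documented here under swift.common.internal_client:

```
from io import BytesIO
from swift.common.internal_client import CompressingFileReader

src = BytesIO(b'some object data' * 1024)
reader = CompressingFileReader(src, compresslevel=9, chunk_size=4096)

# Iterating yields the gzip stream in chunk_size pieces.
gz_bytes = b''.join(iter(reader))
print(len(gz_bytes))
```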
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
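Tying the methods above together, a minimal sketch of InternalClient usage; the config path and account name are assumptions to adapt to your deployment:

```
from swift.common.internal_client import InternalClient, UnexpectedResponse

client = InternalClient('/etc/swift/internal-client.conf',  # assumed path
                        user_agent='example-internal-client',
                        request_tries=3)
try:
    client.create_container('AUTH_test', 'demo')
    for obj in client.iter_objects('AUTH_test', 'demo'):
        print(obj['name'])
except UnexpectedResponse as err:
    print('request failed: %s' % err)
```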
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group : param pid: process id : param sig: signal to send Send signal to process and check process name : param pid: process id : param sig: signal to send : param name: name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the servers binaries are installed or" }, { "data": "server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that arent responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesnt use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but its hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldnt work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract to the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache. 
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include temp_url_signature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to get_swift_info. Note: do not use . in name or in any of the keys in kwargs, as . is used in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large object. req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. max_get_time maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) response_body_length optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .app_iter_range(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnects. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segment's etag doesn't match the manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of a path-like header is well formatted. We assume the caller ensures that the specific header is present in req.headers. req HTTP request object name header name length number of path segments to check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy the desired subset of headers from from_r to to_r. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key.
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to split_and_validate_path(). a list, result of split_and_validate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling of UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in the query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably won't work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects.
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object names. HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasn't stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but it's also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Request's Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers['Content-Range'] sets and gets the value of headers.environ['HTTP_CONTENT_RANGE'] Bases: object Wraps a Request's If-[None-]Match header as a friendly object. headerval value of the header as a str
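As a quick illustration of Accept.best_match (the option list here is arbitrary):

```python
from swift.common import swob

req = swob.Request.blank(
    '/', headers={'Accept': 'application/json;q=1.0, */*;q=0.5'})
# Pick the best content type this server can produce for the client.
assert req.accept.best_match(
    ['text/plain', 'application/json']) == 'application/json'
```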
Bases: object Wraps a Request's Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered as syntactically invalid; thus, a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid, and a ValueError will be thrown so that the header will be ignored. value does not start with bytes= range value start is greater than the end, e.g. bytes=5-3 range does not have start or end, e.g. bytes=- range does not have a hyphen, e.g. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by the underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method __init__ made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored, which will eventually make the response be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, and 416 will be returned; if a returned list has at least one element, the list indicates that there is at least one range valid and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as an Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with a property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this request's environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the request's X-Timestamp header, as a Timestamp Calls the application with this request's environment. Returns a Response object that wraps up the application's result. application the WSGI application to call
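A minimal sketch of wiring a Request to a WSGI app via get_response and the wsgify decorator described below (hello_app is an invented example):

```python
from swift.common import swob

@swob.wsgify
def hello_app(req):
    # wsgify turns this Request -> Response function into a WSGI callable
    return swob.Response(body=b'hello', content_type='text/plain')

req = swob.Request.blank('/v1/AUTH_test/c/o')
resp = req.get_response(hello_app)
assert resp.status_int == 200 and resp.body == b'hello'
```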
Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples:

```
['a'] = split_path('/a')
['a', None] = split_path('/a', 1, 2)
['a', 'c'] = split_path('/a/c', 1, 2)
['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True)
```

minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location.
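To tie the Range and Response machinery together, a sketch (assuming conditional_response behaves as described in the following property docs):

```python
from swift.common import swob

req = swob.Request.blank('/', headers={'Range': 'bytes=0-4'})
# ranges_for_length converts the header to concrete, end-exclusive offsets.
assert req.range.ranges_for_length(10) == [(0, 5)]

# With conditional_response=True, the Response applies the Range itself
# and can answer with a 206 carrying only the selected bytes.
resp = swob.Response(body=b'0123456789', request=req,
                     conditional_response=True)
```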
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of an If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need to support conditional requests. It's most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or app_iter to reset the content_length properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators' close methods, if any. Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of _get_next_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coroutines when it gets garbage collected
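For instance, the ClosingIterator described above is handy for making sure a generator's resources are released even when iteration never starts; a sketch (the file path is arbitrary):

```python
from swift.common.utils import ClosingIterator

def byte_chunks(path):
    with open(path, 'rb') as f:  # generator holds a file handle open
        while True:
            chunk = f.read(65536)
            if not chunk:
                break
            yield chunk

it = ClosingIterator(byte_chunks('/tmp/example.dat'))
it.close()  # closes the wrapped generator even though we never iterated
```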
Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over the wrapped iterator, providing an opportunity to switch greenthreads. This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve. The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used members. Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <account_name>/<container_name>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces.
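A short sketch of the GreenAsyncPile described above (the fetch function is a stand-in for real work):

```python
from swift.common.utils import GreenAsyncPile

def fetch(n):
    return n * n  # stand-in for some blocking/network work

pile = GreenAsyncPile(10)  # at most 10 concurrent green threads
for n in range(100):
    pile.spawn(fetch, n)
results = list(pile)  # results arrive as they complete,
                      # not necessarily in spawn order
```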
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp.
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard range's ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard range's parent shard range, the parent's parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard range's root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard range's root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class __init__() method and requires all of the __init__() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if object_count or bytes_used cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard ranges. This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name>
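Putting make_path to use, a hedged sketch (the account and container names are invented):

```python
import time
from swift.common.utils import ShardRange, Timestamp

ts = Timestamp(time.time())
path = ShardRange.make_path(
    '.shards_AUTH_test',   # shards_account (hidden internal account)
    'my-container',        # root_container
    'my-container',        # parent_container: first-generation shard
    ts, 0)                 # timestamp and index
shard = ShardRange(path, ts, lower='', upper='m')
```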
Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
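For example, filtering a ShardRangeList down to a marker/end_marker window (names and bounds are invented):

```python
import time
from swift.common.utils import ShardRange, ShardRangeList, Timestamp

ts = Timestamp(time.time())
ranges = ShardRangeList([
    ShardRange('.shards_a/c-0', ts, lower='', upper='m'),
    ShardRange('.shards_a/c-1', ts, lower='m', upper=''),
])
# Keep only ranges overlapping the ('a', 'c') window: just the first one.
hits = ranges.filter(marker='a', end_marker='c')
assert len(hits) == 1
```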
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source.

```
>>> si = Spliterator(["abcde", "fg", "hijkl"])
>>> ''.join(si.take(4))
"abcd"
>>> ''.join(si.take(3))
"efg"
>>> ''.join(si.take(1))
"h"
>>> ''.join(si.take(3))
"ijk"
>>> ''.join(si.take(3))
"l"  # shorter than requested; this can happen with the last iterator
```

Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts, it reduces the number of context switches in eventlet by avoiding the need to schedule actions (throw an Exception) and then unschedule them if the timeouts are cancelled. (The original docstring illustrates this with a timeline diagram: the watchdog greenlet sleeps until the next scheduled timeout, wakes up for the first timeout expiration, then calculates a new sleep period.) Stop the watchdog greenthread. Start the watchdog greenthread. Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance
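A sketch of bounding an operation with Watchdog and the WatchdogTimeout context manager (do_slow_io is a hypothetical call, and the spawn() method name for starting the watchdog greenthread is an assumption):

```python
import eventlet
from swift.common.utils import Watchdog, WatchdogTimeout

watchdog = Watchdog()
watchdog.spawn()  # start the single watchdog greenthread (assumed name)

try:
    with WatchdogTimeout(watchdog, 1.0, eventlet.Timeout):
        do_slow_io()  # hypothetical blocking call being bounded
except eventlet.Timeout:
    pass  # the scheduled timeout fired
```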
Given a devices path and a data directory, yield (path, device, partition) for all files in that directory. (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated, e.g. they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing, e.g. saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize number of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware) env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar... start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string. If the comma-separated string is empty/None, just returns item.
Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header Finally, there is one optional key that is used in the multipart/byteranges case: entire_length: length of the requested resource (i.e. the length of the whole entity, not just this range); if omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. boundary, not --boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content-type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it. It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body.
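As an illustration of the namespace lookup helpers above (names and bounds are invented):

```python
from swift.common.utils import Namespace, filter_namespaces, find_namespace

namespaces = [
    Namespace('a/c-0', lower='', upper='m'),
    Namespace('a/c-1', lower='m', upper=''),
]
# find_namespace returns the namespace whose range contains the item.
ns = find_namespace('kiwi', namespaces)  # -> the ('', 'm'] namespace
# filter_namespaces narrows the list by includes/marker/end_marker.
subset = filter_namespaces(namespaces, 'kiwi', None, None)
```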
Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form:

```
<hash>[_<epoch>].db
```

where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function from patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the Python select module to test if poll() is available on the platform. Return partition number for given hex hash and partition power. hex_hash A hash string part_power partition power the partition number devices directory where devices are mounted (e.g.
/srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but an X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.:

```
'lock', None => 'lock'
'lock', 0 => 'lock'
'lock', 1 => 'lock-1'
'object', 2 => 'object-2'
```

base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end
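A minimal lock_file sketch (the path is arbitrary; unlink=False keeps the file around afterwards):

```python
from swift.common.utils import lock_file

# Blocks until the flock is granted or the timeout expires.
with lock_file('/tmp/example.state', timeout=2, unlink=False) as f:
    f.write('owned by this process\n')
# Lock (and file handle) released on exit from the with block.
```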
Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in Write mode. Python doesn't allow directories to be opened in Write Mode. So we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string to distinguish different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if the target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set, get, etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Ensures the path is a directory or makes it if not.
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ("form-data", {"name": "somefile", "filename": "test.html"}) header Value of a header (the part after the ':'). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples:

```
'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')])
'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')])
```

content_type content-type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension.

```
>>> parse_db_filename("ab2134.db")
('ab2134', None, '.db')
>>> parse_db_filename("ab2134_1234567890.12345.db")
('ab2134', '1234567890.12345', '.db')
```

filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. iterable an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
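For instance, the replication quorum described above works out as follows:

```python
from swift.common.utils import quorum_size

# Majority quorum for replicated account/container requests: (n // 2) + 1
assert quorum_size(3) == 2
assert quorum_size(4) == 3
```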
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different types of servers: If option replication_server is None then this decorator doesn't matter. If option replication_server is True then ONLY methods decorated with this decorator will be started. If option replication_server is False then methods decorated with this decorator will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are exhausted. its list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate a device's variables inside an rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dir_ext if present directories that end with dir_ext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that item's value will determine if the replication ip address and port are returned. If neither use_replication nor node_dict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error.
value some base64-encoded data allowlinebreaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swifts wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and wont lead to directory traversal when" }, { "data": "device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowedsynchosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.containersyncrealms.ContainerSyncRealms to validate against. A tuple of (errorstring, validatedendpoint, realm, realmkey). The errorstring will None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensure it is synced to disk, then perform a move to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) well intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploys ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entrypointname entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters. 
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entrypointname entry point of middleware or app (Swift only) True if entrypointname is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The serversperport integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. serversperport (int) The number of workers to run per port. Yields all known listen sockets. Log a servers exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and" }, { "data": "Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will passed into the logsockexit() and registerworkerstart() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The sockets (port, server_idx) as yielded by newworkersocks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. Yields all known listen sockets. Log a servers exit. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opqaue_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will passed into the logsockexit() and registerworkerstart() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited. 
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrapsocket if conf specifies certfile Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in globalconf will override those in conffile except where the conf_file option is preceded by set. allowmodifypipeline if True, and the context is a pipeline, and the loaded app has a modifywsgipipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI" }, { "data": "env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env[PATHINFO]. url += quote(environ[PATHINFO]) querystring The new querystring or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += ? + env[QUERY_STRING] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. makeenv makesubrequest calls this make_env to help build the swob.Request. 
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in a pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a servers_per_port setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allow_modify_pipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whose first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or, if no such entity exists on the file system, just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclass (e.g. StoragePolicy, ECStoragePolicy, etc.) or use reload_storage_policies() to load POLICIES from swift.conf. The object_ring property is lazy loaded once the service's swift_dir is known via get_object_ring(), but it may be overridden via the object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This shorthand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header.
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index. if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given that the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib's min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there is a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies are defined, Swift will create one. The policy index must be a non-negative integer. If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default. Policy indexes must be unique. Policy names are required. Policy names are case insensitive. Policy names must contain only letters, digits or a dash. Policy names must be unique. The policy name Policy-0 can only be used for the policy with index 0. If any policies are defined, exactly one policy must be declared the default. Deprecated policies cannot be declared the default. Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policyorindex StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license." } ]
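As a quick, hedged postscript on the policy-string helpers documented above (the index-0 result reflects the legacy Policy-0 convention noted in the docstrings):
```
from swift.common.storage_policy import (
    get_policy_string, split_policy_string)

get_policy_string('objects', 0)   # -> 'objects' (legacy Policy-0)
get_policy_string('objects', 2)   # -> 'objects-2'
base, policy = split_policy_string('objects-2')
# base == 'objects'; policy is the StoragePolicy for index 2
# (assumes a policy with index 2 exists, else PolicyError is raised)
```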
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.etag_quoter.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential: ```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
``` An example client using keystone auth with the python boto library is as follows: ```
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=OrdinaryCallingFormat())
``` Set s3api before your auth middleware in the pipeline in your proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is: ```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
``` When using keystone, the config will be: ```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
``` Finally, add the s3api middleware section: ```
[filter:s3api]
use = egg:swift#s3api
``` Note keystonemiddleware.authtoken can be located before/after s3api, but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will validate the token against keystone (i.e., authenticate twice). In the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3, so any existing issues in swift3 may still remain. Please read the descriptions in the example proxy-server.conf and make sure you understand what each option does before enabling it. The compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
``` you should now use ```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. The authenticate method will run a pre-authentication request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
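The partNumber rules quoted earlier on this page can be condensed into a small illustrative sketch; the helper name and plain ValueErrors below are stand-ins for this explanation, not the middleware's actual internals:
```
def check_part_number(raw, parts_count=None, max_upload_part_num=1000):
    """Illustrative only; mirrors the documented partNumber rules."""
    try:
        num = int(raw)
    except (TypeError, ValueError):
        raise ValueError('InvalidPartArgument')
    # Valid range is 1 .. max(max_upload_part_num, parts_count or 0)
    if not 1 <= num <= max(max_upload_part_num, parts_count or 0):
        raise ValueError('InvalidPartArgument')
    # A range-valid number may still exceed an existing object's
    # actual number of parts -> InvalidPartNumber
    if parts_count is not None and num > parts_count:
        raise ValueError('InvalidPartNumber')
    return num
```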
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swifts HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why we need this?: Original lxml.etree.Element supports only unicode for the text. It declines maintainability because we have to call a lot of encode/decode methods to apply account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such a redundant codes from swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a" }, { "data": "method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time really like as time.mktime the difference from time.mktime is allowing to 2 formats string for the argument for the S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. 
The current s3api (real S3) ACL model is as follows: ``` AccessControlPolicy: Owner: AccessControlList: Grant[n]: (Grantee, Permission) ``` Each bucket or object has its own ACL consisting of an Owner and an AccessControlList. An AccessControlList can contain several Grants. By default, the AccessControlList has only one Grant allowing FULL_CONTROL to the owner. Each Grant is a single (Grantee, Permission) pair. The Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. For more detail on the S3 ACL model, see the official documentation: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. Refs (http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACLs. Bases: object Grant class, which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantees. Methods: init: create a Grantee instance; elem: create an ElementTree from itself. Static methods: from_header: convert a grantee string in an HTTP header to a Grantee instance; from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 predefined groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To keep the controller classes clean, we need these handlers; they are useful for customizing the ACL-checking algorithm for each controller. BaseAclHandler wraps basic ACL handling (i.e. it will check the ACL from ACL_MAP by using HEAD). Make a handler with the name of the controller (e.g.
BucketAclHandler is for BucketController). It consists of method(s) for the actual S3 methods on the controllers, as follows. Example: ```
class BucketAclHandler(BaseAclHandler):
    def PUT(self):
        # put ACL handling algorithms here for PUT bucket
        ...
``` Note If the method does NOT need to call get_response again outside of the ACL check, it has to return the response it needs at the end of the method. Bases: object BaseAclHandler: handles ACLs for basic requests mapped in ACL_MAP Get an ACL instance from S3 (e.g. x-amz-grant) headers or an S3 ACL XML body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler Multipart-upload operations require ACL checking just once, on the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the ACL only when the verb is defined. The verb should be defined as the first step when the incoming request is made to the backend Swift. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is OK, but the check should happen as soon as possible. 
| Controller | Verb | CheckResource | Permission |
|:-|:-|:-|:-|
| Part | PUT | Container | WRITE |
| Uploads | GET | Container | READ |
| Uploads | POST | Container | WRITE |
| Upload | GET | Container | READ |
| Upload | DELETE | Container | WRITE |
| Upload | POST | Container | WRITE |
Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal ACLs (not implemented) on s3acl. TODO: add translation from Swift ACLs such as x-container-read to S3 ACLs Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure that the container exists. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket requests. Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
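Conceptually, the owner/grant evaluation these handlers delegate to looks something like the following sketch; the attribute names (owner.id, grant.permission, grantee.matches) are assumptions for illustration, not the actual s3api internals:
```
def has_permission(acl, user_id, permission):
    # The resource owner is implicitly granted FULL_CONTROL.
    if acl.owner.id == user_id:
        return True
    # Otherwise consult each Grant's (Grantee, Permission) pair.
    return any(
        grant.permission in (permission, 'FULL_CONTROL')
        and grant.grantee.matches(user_id)  # assumed helper
        for grant in acl.grants)
```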
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the Swift headers. Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequests proxy log will have a swift.source set to BD the logs content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is" }, { "data": "inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domains CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environments Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and its unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | Metadata Use X-Container-Meta-Quota-Bytes Maximum size of the container, in bytes. X-Container-Meta-Quota-Count Maximum object count of the container. The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. 
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
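To make the wrapper idea concrete, here is a minimal sketch (not the actual implementation; the class and attribute names are illustrative) of a wsgi.input proxy that counts the bytes an application reads:

```
class CountingInput(object):
    # Illustrative sketch: wrap a wsgi.input file-like object and
    # tally how many bytes the application actually reads from it.
    def __init__(self, wsgi_input):
        self.wsgi_input = wsgi_input
        self.bytes_received = 0

    def read(self, *args, **kwargs):
        chunk = self.wsgi_input.read(*args, **kwargs)
        self.bytes_received += len(chunk)
        return chunk

    def readline(self, *args, **kwargs):
        line = self.wsgi_input.readline(*args, **kwargs)
        self.bytes_received += len(line)
        return line

# A middleware would then swap the wrapper in before calling the app:
# env['wsgi.input'] = CountingInput(env['wsgi.input'])
```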
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache: ``` [pipeline:main] pipeline = ... cache etag-quoter ... [filter:etag-quoter] use = egg:swift#etag_quoter ``` Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is: ``` <form action="<swift-url>" method="POST" enctype="multipart/form-data"> <input type="hidden" name="redirect" value="<redirect-url>" /> <input type="hidden" name="max_file_size" value="<bytes>" /> <input type="hidden" name="max_file_count" value="<count>" /> <input type="hidden" name="expires" value="<unix-timestamp>" /> <input type="hidden" name="signature" value="<hmac>" /> <input type="file" name="file1" /><br /> <input type="submit" /> </form> ``` Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input: ``` <input type="hidden" name="x_delete_at" value="<unix-timestamp>" /> <input type="hidden" name="x_delete_after" value="<seconds>" /> ``` If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input: ``` <input type="hidden" name="content-type" value="text/html" /> <input type="hidden" name="content-encoding" value="gzip" /> ``` The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like: ``` https://swift-cluster.example.com/v1/AUTH_account/container/ ``` Optionally, you can include an object prefix to better separate different users' uploads, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` Note the form method must be POST and the enctype must be set as multipart/form-data. 
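Such a form can also be exercised from the command line; here is a sketch using curl (the values and the photo.jpg filename are hypothetical, and the file field is deliberately sent last, matching the attribute-ordering requirement described below):

```
curl -i https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix \
  -F redirect= \
  -F max_file_size=104857600 \
  -F max_file_count=10 \
  -F expires=<unix-timestamp> \
  -F signature=<hmac> \
  -F file1=@photo.jpg
```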
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted; after this time the form is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature: ``` import hmac from hashlib import sha512 from time import time path = '/v1/account/container/object_prefix' redirect = 'https://srv.com/some-page' # set to '' if redirect not in form max_file_size = 104857600 max_file_count = 10 expires = int(time() + 600) key = 'mykey' hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size, max_file_count, expires) signature = hmac.new(key, hmac_body, sha512).hexdigest() ``` The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy. The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly. 
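For example, a pipeline with gatekeeper placed as described might look like this (a sketch; the surrounding middleware are illustrative):

```
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache ... proxy-server
```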
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetchcryptokeys hook in the request environment. Subclasses should define logroute, keymasteropts, and keymasterconfsection attributes, and implement the getroot_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being" }, { "data": "secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function: ``` <pathkey> = HMACSHA256(<root_secret>, <path>) ``` Setup container and object keys based on the request path. Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and allids. The allids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys. Bases: object Swift middleware to Keystone authorization system. In Swifts proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone. 
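For instance, a minimal keystone-enabled pipeline, with authtoken ahead of keystoneauth as required, might look like the following sketch:

```
[pipeline:main]
pipeline = catch_errors cache authtoken keystoneauth proxy-server
```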
The full pipeline is shown in proxy-server.conf-sample. The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to have the setting account auto creation to true: ``` [app:proxy-server] account_autocreate = true ``` And add a swift authorization filter section, such as: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this: ``` reseller_prefix = NEWAUTH ``` Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example: ``` reseller_prefix = AUTH, SERVICE ``` This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix: ``` operator_roles service_roles ``` For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes: ``` reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, someotherrole ``` X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration: ``` reseller_prefix = AUTH, SERVICE AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_service_roles = service ``` The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique. 
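As an illustration, granting read access to a user in another tenant might look like the following sketch (host, token and UUIDs are hypothetical; the ACL value takes the form <tenant-uuid>:<user-uuid>):

```
curl -i -X POST http://<host>:<port>/v1/AUTH_<tenant_id>/container \
  -H 'X-Auth-Token: <token>' \
  -H 'X-Container-Read: <grantee_tenant_uuid>:<grantee_user_uuid>'
```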
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration: ``` default_domain_id = default ``` The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false: ``` allow_names_in_acls = false ``` To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false: ``` allow_overrides = false ``` app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form: ``` /endpoints/{account}/{container}/{object} /endpoints/{account}/{container} /endpoints/{account} /endpoints/v1/{account}/{container}/{object} /endpoints/v1/{account}/{container} /endpoints/v1/{account} ``` with a JSON-encoded list of endpoints of the form: ``` http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj} http://{server}:{port}/{dev}/{part}/{acc}/{cont} http://{server}:{port}/{dev}/{part}/{acc} ``` correspondingly, e.g.: ``` http://10.1.1.1:6200/sda1/2/a/c2/o1 http://10.1.1.1:6200/sda1/2/a/c2 http://10.1.1.1:6200/sda1/2/a ``` Using the v2 API, answers requests of the form: ``` /endpoints/v2/{account}/{container}/{object} /endpoints/v2/{account}/{container} /endpoints/v2/{account} ``` with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.: ``` { "endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1", "http://10.1.1.1:6230/sda3/2/a/c3/o1", "http://10.1.1.1:6240/sda4/2/a/c3/o1"], "headers": {"X-Backend-Storage-Policy-Index": "1"}} ``` In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1. 
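For example, a service inside the cluster might look up an object's locations with a simple unauthenticated GET (proxy host, account and names are hypothetical):

```
curl -s http://<proxy-host>:8080/endpoints/v2/AUTH_test/pictures/cat.jpg
```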
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012. A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.: ``` [pipeline:main] pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server [filter:name_check] use = egg:swift#name_check forbidden_chars = '"`<> maximum_length = 255 ``` There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if path is invalid. @author: eamonn-otoole Object versioning in Swift has 3 different modes. There are two legacy modes that have similar API with a slight difference in behavior and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature containers in a Swift cluster will be in one of three possible states: 1. Object Versioning never enabled, 2. Object Versioning Enabled or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored. 
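For example, versioning might be enabled on a container like this (hypothetical host and token):

```
curl -i -X POST http://<host>:<port>/v1/AUTH_test/container \
  -H 'X-Auth-Token: <token>' \
  -H 'X-Versions-Enabled: true'
```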
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter: ``` ?versions ``` To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter: ``` ?version-id=<id> ``` where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in that version only being deleted and no changes made to the symlink pointing to the current version. 
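Putting these pieces together, a client might list the versions of a container and then delete one specific version (all values hypothetical):

```
curl -s 'http://<host>:<port>/v1/AUTH_test/container?versions&format=json' \
  -H 'X-Auth-Token: <token>'
curl -i -X DELETE 'http://<host>:<port>/v1/AUTH_test/container/obj?version-id=<id>' \
  -H 'X-Auth-Token: <token>'
```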
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline, you must also set the option allowobjectversioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we dont have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a users container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a users container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the objects bytes and etag to use the targets instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object req original request. versions_cont container where previous versions of the object are stored. account account name. Check if the current version of the object is a versions-symlink if not, its because this object was added to the container when versioning was not enabled. Well need to copy it into the versions containers now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned" }, { "data": "req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method. 
The logging format implemented below is as follows: ``` clientip remoteaddr end_time.datetime method path protocol statusint referer useragent authtoken bytesrecvd bytes_sent clientetag transactionid headers requesttime source loginfo starttime endtime policy_index ``` These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remoteaddr is the contents of the REMOTEADDR environment variable, while client_ip is swifts best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middlewares start_response function, unless the WSGI environment has an item with key swift.proxyloggingstatus, in which case the value of that item is used. Other middlewares may set swift.proxyloggingstatus to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) loginfo (swift.loginfo in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.loginfo', []).append(yourinfo) so as to not disturb others log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the logmsgtemplate option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy servers pipeline when there is middleware installed that can return custom responses that dont follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the clients original request using the 2nd requests body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxyaccesslog_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swifts proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request. 
req" }, { "data": "object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wirestatusint the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that make an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the systems page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys. 
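For instance, a copy that also sets a metadata key (which would override any conflicting key carried over from the source) might look like this sketch (names and values hypothetical):

```
curl -i -X PUT http://<storage_url>/container1/destination_obj \
  -H 'X-Auth-Token: <token>' \
  -H 'X-Copy-From: /container2/source_obj' \
  -H 'X-Object-Meta-Color: blue' \
  -H 'Content-Length: 0'
```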
Server side copy can also be used to change the content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be zero). There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example: ``` curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0' ``` Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example: ``` curl -i -X COPY http://<storage_url>/container2/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0' ``` Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example: ``` curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0' ``` Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example: ``` curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0' ``` The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request: ``` ?multipart-manifest=get ``` If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs for Static Large Objects for further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added. 
For example: ``` [DEFAULT] ... [pipeline:main] pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server ... [filter:staticweb] use = egg:swift#staticweb ``` Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values: ``` X-Container-Meta-Web-Index <index.name> X-Container-Meta-Web-Error <error.name.suffix> ``` If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the" }, { "data": "Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note tempurlprefix values should typically end with a slash (/) when used with StaticWeb. StaticWebs redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable: ``` swift post -r '.r:*' container ``` You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive: ``` swift post -m 'web-index:index.html' container ``` You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings: ``` swift post -r '.r:*,.rlistings' container swift post -m 'web-listings: true' container ``` Now you should see object listings for paths and pseudo paths that have no index.html. 
Enable a custom listings style sheet: ``` swift post -m 'web-listings-css:listings.css' container ``` Set an error file: ``` swift post -m 'web-error:error.html' container ``` Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object: ``` swift post -m 'web-directory-type:text/directory' container ``` Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series" }, { "data": "Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object theyre pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlinks X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e. 
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlinks Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlinks Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlinks Content-Type using the ?symlink=get interface. You can change a symlinks Content-Type using a POST request. The symlinks Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink" }, { "data": "The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlinks target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function. 
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors: PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example: ``` internal_client_conf_path = /etc/swift/container-sync-client.conf ``` Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object. 
req HTTP GET or HEAD object request Response Iterator Handle get/head request when the client sent the parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, an HTTPTemporaryRedirect error message is returned to the client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink- headers to client-facing X-Symlink- headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as: ``` [pipeline:main] pipeline = catch_errors cache tempauth proxy-server ``` Set account auto creation to true in proxy-server.conf: ``` [app:proxy-server] account_autocreate = true ``` And add a tempauth filter section, such as: ``` [filter:tempauth] use = egg:swift#tempauth user_admin_admin = admin .admin .reseller_admin user_test_tester = testing .admin user_test2_tester2 = testing2 .admin user_test_tester3 = testing3 user64_dW5kZXJfc2NvcmU_YV9i = testing4 ``` See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is: ``` user_<account>_<user> = <key> [group] [group] [...] [storage_url] ``` If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this: ``` user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url] ``` There are three special groups: .reseller_admin can do anything to any account for this auth .reseller_reader can GET/HEAD anything in any account for this auth .admin can do anything within the account If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to: ``` $HOST/v1/<reseller_prefix>_<account> ``` Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly be correct when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH. When a request's token and/or path start with AUTH, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user: ``` [filter:tempauth] use = egg:swift#tempauth reseller_prefix = AUTH, SERVICE SERVICE_require_group = .service user_admin_admin = admin .admin .reseller_admin user_joeacct_joe = joepw .admin user_maryacct_mary = marypw .admin user_glance_glance = glancepw .service ``` The name .service is an example. Unlike .admin, .reseller_admin and .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL: ``` from swift.common.middleware.acl import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } header_value = format_acl(version=2, acl_dict=acl_data) ``` To generate a curl command line from the above: ``` token=... storage_url=...
python -c ' from swift.common.middleware.acl import format_acl acl_data = { "admin": ["alice"], "read-write": ["bob", "carol"] } headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)} header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()]) print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str) ' ``` Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers of the past. Examples: ``` GET <auth-prefix>/v1/<act>/auth X-Auth-User: <act>:<usr> or X-Storage-User: <usr> X-Auth-Key: <key> or X-Storage-Pass: <key> GET <auth-prefix>/auth X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> X-Auth-Key: <key> or X-Storage-Pass: <key> GET <auth-prefix>/v1.0 X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> X-Auth-Key: <key> or X-Storage-Pass: <key> ``` On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all their friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs that contain signatures valid for all objects sharing a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the IP from which the resource is allowed to be accessed. This can be useful for locking down where the URLs can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' key = b'mykey' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563 ``` For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be: ``` import base64 import hmac from hashlib import sha512 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' key = b'mykey' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new( key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii') ``` Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm 5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==& temp_url_expires=1516741234 ``` You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=2017-12-05T21:16:03Z ``` If a prefix-based signature with the prefix pre is desired, set path to: ``` path = 'prefix:/v1/AUTH_account/container/pre' ``` The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/pre/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563& temp_url_prefix=pre ``` Another valid URL: ``` https://swift-cluster.example.com/v1/AUTH_account/container/pre/ subfolder/another_object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563& temp_url_prefix=pre ``` If you wish to lock down the IP range from which the resource can be accessed to the IP 1.2.3.4: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' ip_range = '1.2.3.4' key = b'mykey' hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` The generated signature would only be valid from the IP 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c& temp_url_expires=1648082711& temp_url_ip_range=1.2.3.4 ``` Similarly, to lock down the IP to the range 1.2.3.X (i.e. from 1.2.3.0 to 1.2.3.255): ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' ip_range = '1.2.3.0/24' key = b'mykey' hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` Then the following URL would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83& temp_url_expires=1648082711& temp_url_ip_range=1.2.3.0/24 ``` Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563&filename=My+Test+File.pdf ``` If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563&inline ``` In some cases, the client might not be able to present the content of the object, but you still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563&inline&filename=My+Test+File.pdf ``` This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTPXPRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTPXSENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that is now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simulataneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archivecontainer>/<length><objectname>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata" }, { "data": "A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object. 
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline, and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions: ``` curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container: ``` curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions: ``` curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allow_versioned_writes to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name.
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on an eventlet aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates CPU timing statistics for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below. Index page for browsing profile data: ``` http://SERVER_IP:PORT/__profile__ ``` List all profiles to return profile ids in json format: ``` http://SERVER_IP:PORT/__profile__/ http://SERVER_IP:PORT/__profile__/all ``` Retrieve specific profile data in different formats: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods] ``` Retrieve metrics from a specific function in json format: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json http://SERVER_IP:PORT/__profile__/current/NFL?format=json http://SERVER_IP:PORT/__profile__/all/NFL?format=json NFL is defined by concatenation of file name, function name and the first line number. e.g.:: account.py:50(GETorHEAD) or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD) A list of URL examples: http://localhost:8080/__profile__ (proxy server) http://localhost:6200/__profile__/all (object server) http://localhost:6201/__profile__/current (container server) http://localhost:6202/__profile__/12345?format=json (account server) ``` The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (by default), json, csv and ODS spreadsheet, the last of which requires installing the odfpy library: ``` sudo pip install odfpy ``` There's also a simple visualization capability which is enabled by using the matplotlib toolkit. It is also required to be installed if you want to use this feature.
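As a sketch of what enabling this middleware might look like, here is a hypothetical filter section; the option names below follow the sample configuration files mentioned above and should be checked against the sample for your release: ``` [pipeline:main] pipeline = xprofile proxy-server [filter:xprofile] use = egg:swift#xprofile # Where profile data is accumulated on disk (assumed default-style path). log_filename_prefix = /tmp/log/swift/profile/default.profile # How often (in seconds) the profile data is dumped (assumption). dump_interval = 5.0 # The URL path prefix served by the mini web UI (assumption). path = /__profile__ ```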
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.copy.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the account's policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catch_errors cache tempauth account_quotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesn't know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to eventual consistency, further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST API on top of Swift. To enable this middleware in your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for your Swift project. When running Swift in a SAIO environment, make sure you have set up the tempauth middleware configuration in proxy-server.conf; the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the Swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth user_admin_admin = admin .admin .reseller_admin user_test_tester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection, OrdinaryCallingFormat connection = S3Connection( aws_access_key_id='test:tester', aws_secret_access_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=OrdinaryCallingFormat()) ``` And if you are using keystone auth, you need EC2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or created with the openstack ec2 command.
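For instance, a credential like the one shown below can be created with the OpenStack client; the exact invocation and output columns are assumptions that may vary by client release: ``` openstack ec2 credentials create ```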
Here is example output from creating an EC2 credential: ```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
``` An example client using keystone auth with the python boto library will be: ``` from boto.s3.connection import S3Connection, OrdinaryCallingFormat connection = S3Connection( aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61', aws_secret_access_key='baab242d192a4cd6b68696863e07ed59', port=8080, host='127.0.0.1', is_secure=False, calling_format=OrdinaryCallingFormat()) ``` Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is: ``` [pipeline:main] pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server ``` When using keystone, the config will be: ``` [pipeline:main] pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server ``` Finally, add the s3api middleware section: ``` [filter:s3api] use = egg:swift#s3api ``` Note keystonemiddleware.authtoken can be located before/after s3api but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 may still remain. Please check the descriptions in the example proxy-server.conf and understand what happens with each config option before enabling it. The compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for details. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ``` [filter:s3token] use = egg:swift3#s3token auth_uri = https://keystonehost:35357 auth_version = 3 ``` you should now use ``` [filter:s3token] use = egg:swift#s3token auth_uri = https://keystonehost:35357/v3 ``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of get_response to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this request's environment. Bases: BaseException Client provided an X-Amz-Content-SHA256, but it doesn't match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property check_copy_source checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response get_container_info will return a result dict of get_container_info from the backend Swift. a dictionary of container info from swift.controllers.base.get_container_info NoSuchBucket when the container doesn't exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks are needed at the time of getting the swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request needs to just call get_response to get a pure swift response. Get and set the object acl property S3Timestamp from Date header. If the X-Amz-Date header is specified, it takes precedence over the Date header. :return: S3Timestamp instance Create a Swift request based on this request's environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured max_upload_part_num and, if given, parts_count. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is that v4 has to look at X-Amz-Date even for the query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to Swift's normal HeaderKeyDict class, but its key name is normalized as S3 clients expect.
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why do we need this? The original lxml.etree.Element supports only unicode for the text, which hurts maintainability because we would have to call a lot of encode/decode methods to apply account/container/object names (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats for the argument, for S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs.
The current s3api (real S3) ACL model is as follows: ``` AccessControlPolicy: Owner: AccessControlList: Grant[n]: (Grantee, Permission) ``` Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant includes a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you want more detailed information about the S3 ACL model, please see the official documentation here, http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves completely the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACL. Bases: object Grant Class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantee. Methods: init: create a Grantee instance elem: create an ElementTree from itself Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance. from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic Acl handling. (i.e. it will check acl from ACL_MAP by using HEAD) Make a handler with the name of the controller. (e.g.
BucketAclHandler is for BucketController) It consists of method(s) for each actual S3 method on the controllers, as follows. Example: ``` class BucketAclHandler(BaseAclHandler): def PUT: << put acl handling algorithms here for PUT bucket >> ``` Note If the method does NOT need to call get_response again outside of the acl checking, the method has to return the response it needs at the end of the method. Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for the BASE container so that MultiUploadAclHandler extends BaseAclHandler to check acl only when the verb is defined. We should define the verb as the first step to request to backend Swift at the incoming request. The BASE container name is always w/o MULTIUPLOAD_SUFFIX Any check timing is ok but we should check it as soon as possible.
| Controller | Verb | CheckResource | Permission |
|:-|:-|:-|:-|
| Part | PUT | Container | WRITE |
| Uploads | GET | Container | READ |
| Uploads | POST | Container | WRITE |
| Upload | GET | Container | READ |
| Upload | DELETE | Container | WRITE |
| Upload | POST | Container | WRITE |
Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acl such as x-container-read to s3acl Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container existence. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket requests. Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how s3api uses Swift containers and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where the multipart upload is initiated. An object for the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not yet completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[upload_id]/[part_number]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[upload_id1] # upload id object for upload_id1 [bucket]+segments/[upload_id1]/1 # part object for upload_id1 [bucket]+segments/[upload_id1]/2 # part object for upload_id1 [bucket]+segments/[upload_id1]/3 # part object for upload_id1 [bucket]+segments/[upload_id2] # upload id object for upload_id2 [bucket]+segments/[upload_id2]/1 # part object for upload_id2 [bucket]+segments/[upload_id2]/2 # part object for upload_id2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit.
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
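As a rough illustration of the request format described above, a bulk delete could be issued with Python's standard library along these lines; the proxy address, account, token and item names here are placeholders, and the default ?bulk-delete query parameter is assumed:
```
# Hedged sketch of a bulk delete request; the address, account,
# token and item names below are hypothetical.
import urllib.request

body = b'\n'.join([
    b'/container_name/obj_name',
    b'/container_name/other_obj',
    b'/empty_container',  # a container must be empty to be deleted
])
req = urllib.request.Request(
    'http://127.0.0.1:8080/v1/AUTH_test?bulk-delete',
    data=body, method='POST',
    headers={'X-Auth-Token': '<token>',
             'Content-Type': 'text/plain',
             'Accept': 'application/json'})
with urllib.request.urlopen(req) as resp:
    # The status is always 200 OK; parse the body for the real results.
    print(resp.read().decode('utf-8'))
```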
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD and a content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. Takes the resulting format, the generated data about results, the list of quoted filenames that failed, and the tag name to use for root elements when returning XML; e.g. extract or delete. Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised. inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and an inability to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware.
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response" }, { "data": "resp application response boundary multipart boundary string body_key decryption key for the response body cryptometa cryptometa for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body cryptometa cryptometa for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fix the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beggining of the pipeline, right after cache: ``` [pipeline:main] pipeline = ... cache etag-quoter ... [filter:etag-quoter] use = egg:swift#etag_quoter ``` Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in \"d41d8cd98f00b204e9800998ecf8427e\". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is: ``` <form action=\"<swift-url>\" method=\"POST\" enctype=\"multipart/form-data\"> <input type=\"hidden\" name=\"redirect\" value=\"<redirect-url>\" /> <input type=\"hidden\" name=\"maxfilesize\" value=\"<bytes>\" /> <input type=\"hidden\" name=\"maxfilecount\" value=\"<count>\" /> <input type=\"hidden\" name=\"expires\" value=\"<unix-timestamp>\" /> <input type=\"hidden\" name=\"signature\" value=\"<hmac>\" /> <input type=\"file\" name=\"file1\" /><br /> <input type=\"submit\" /> </form> ``` Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input: ``` <input type=\"hidden\" name=\"xdeleteat\" value=\"<unix-timestamp>\" /> <input type=\"hidden\" name=\"xdeleteafter\" value=\"<seconds>\" /> ``` If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input: ``` <input type=\"hidden\" name=\"content-type\" value=\"text/html\" /> <input type=\"hidden\" name=\"content-encoding\" value=\"gzip\" /> ``` The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as: ``` https://swift-cluster.example.com/v1/AUTHaccount/container/objectprefix ``` The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of container with a url like: ``` https://swift-cluster.example.com/v1/AUTH_account/container/ ``` Optionally, you can include an object prefix to better separate different users uploads, such as: ``` https://swift-cluster.example.com/v1/AUTHaccount/container/objectprefix ``` Note the form method must be POST and the enctype must be set as multipart/form-data. 
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as maxfilesize exceeded). The maxfilesize attribute must be included and indicates the largest single file upload that can be done, in bytes. The maxfilecount attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type=\"file\" name=\"filexx\" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated. The signature attribute is the HMAC signature of the" }, { "data": "Here is sample code for computing the signature: ``` import hmac from hashlib import sha512 from time import time path = '/v1/account/container/object_prefix' redirect = 'https://srv.com/some-page' # set to '' if redirect not in form maxfilesize = 104857600 maxfilecount = 10 expires = int(time() + 600) key = 'mykey' hmac_body = '%s\\n%s\\n%s\\n%s\\n%s' % (path, redirect, maxfilesize, maxfilecount, expires) signature = hmac.new(key, hmac_body, sha512).hexdigest() ``` The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that xdeleteat and xdeleteafter are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they wont be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory to service many requests, some with large files, there just isnt enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attributes value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy. The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly. 
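For instance, a pipeline honoring that placement might begin like the following sketch; the middleware shown after gatekeeper is only an example, not a recommendation:
```
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache proxy-server
```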
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetchcryptokeys hook in the request environment. Subclasses should define logroute, keymasteropts, and keymasterconfsection attributes, and implement the getroot_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being" }, { "data": "secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function: ``` <pathkey> = HMACSHA256(<root_secret>, <path>) ``` Setup container and object keys based on the request path. Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and allids. The allids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys. Bases: object Swift middleware to Keystone authorization system. In Swifts proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone. 
proxy-server.conf-sample The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delayauthdecision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to have the setting account auto creation to true: ``` [app:proxy-server] account_autocreate = true ``` And add a swift authorization filter section, such as: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id.. For example, if the project id is 1234, the path is" }, { "data": "If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this: ``` reseller_prefix = NEWAUTH ``` Dont forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example: ``` reseller_prefix = AUTH, SERVICE ``` This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix: ``` operator_roles service_roles ``` For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes: ``` reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator AUTHoperatorroles = admin, swiftoperator SERVICEoperatorroles = admin, swiftoperator SERVICEoperatorroles = admin, someotherrole ``` X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration: ``` reseller_prefix = AUTH, SERVICE AUTHoperatorroles = admin, swiftoperator SERVICEoperatorroles = admin, swiftoperator SERVICEserviceroles = service ``` The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique. 
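For example, a read ACL granting access to one specific user in another project could be set with the swift client roughly as follows; the placeholders must be replaced with the real tenant and user UUIDs:
```
swift post -r '<tenant_uuid>:<user_uuid>' container
```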
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the defaultdomainid option in the keystoneauth configuration: ``` defaultdomainid = default ``` The backwards compatible behavior can be disabled by setting the config option allownamesin_acls to false: ``` allownamesin_acls = false ``` To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reselleradmin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know youre not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false: ``` allow_overrides = false ``` app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI" }, { "data": "Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form: ``` /endpoints/{account}/{container}/{object} /endpoints/{account}/{container} /endpoints/{account} /endpoints/v1/{account}/{container}/{object} /endpoints/v1/{account}/{container} /endpoints/v1/{account} ``` with a JSON-encoded list of endpoints of the form: ``` http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj} http://{server}:{port}/{dev}/{part}/{acc}/{cont} http://{server}:{port}/{dev}/{part}/{acc} ``` correspondingly, e.g.: ``` http://10.1.1.1:6200/sda1/2/a/c2/o1 http://10.1.1.1:6200/sda1/2/a/c2 http://10.1.1.1:6200/sda1/2/a ``` Using the v2 API, answers requests of the form: ``` /endpoints/v2/{account}/{container}/{object} /endpoints/v2/{account}/{container} /endpoints/v2/{account} ``` with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.: ``` { \"endpoints\": {\"http://10.1.1.1:6210/sda1/2/a/c3/o1\", \"http://10.1.1.1:6230/sda3/2/a/c3/o1\", \"http://10.1.1.1:6240/sda4/2/a/c3/o1\"}, \"headers\": {\"X-Backend-Storage-Policy-Index\": \"1\"}} ``` In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the objects container is using storage policy index 1. 
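A client of the v2 API might consume that response along these lines; this is a sketch with a hypothetical proxy address and names, relying only on the endpoints/headers layout shown above:
```
# Sketch of a v2 endpoints lookup; the address and names are
# hypothetical.
import json
import urllib.request

url = 'http://127.0.0.1:8080/endpoints/v2/AUTH_test/c1/o1'
with urllib.request.urlopen(url) as resp:  # note: not authenticated
    info = json.load(resp)

extra = info.get('headers', {})  # e.g. X-Backend-Storage-Policy-Index
for endpoint in info['endpoints']:
    # A locality-aware client could contact these servers directly,
    # sending the suggested headers with each request.
    print(endpoint, extra)
```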
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it is provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.: ``` [pipeline:main] pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server [filter:name_check] use = egg:swift#name_check forbidden_chars = '\"`<> maximum_length = 255 ``` There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid. @author: eamonn-otoole Object versioning in Swift has three different modes. There are two legacy modes that have a similar API with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and for interacting with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either the X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature, containers in a Swift cluster will be in one of three possible states: 1. object versioning never enabled, 2. object versioning enabled, or 3. object versioning disabled. Once versioning has been enabled on a container, it will always have a flag stating whether versioning is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous versions' content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter: ``` ?versions ``` To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker parameter is used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter: ``` ?version-id=<id> ``` where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted, with no changes made to the symlink pointing to the current version.
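Putting the listing and version-id parameters together, a versioned read might look like the following hedged sketch; the address, names and token are placeholders, and the version_id and is_latest listing field names are assumptions matching the headers and fields described above:
```
# Sketch of listing versions and reading a specific one; names,
# address, token and the listing field names are assumptions.
import json
import urllib.request

base = 'http://127.0.0.1:8080/v1/AUTH_test/container'
headers = {'X-Auth-Token': '<token>'}

req = urllib.request.Request(base + '?versions&format=json',
                             headers=headers)
with urllib.request.urlopen(req) as resp:
    listing = json.load(resp)

for entry in listing:
    # Delete markers carry the special content-type noted above.
    print(entry['name'], entry.get('version_id'), entry.get('is_latest'))

# Fetch one particular older version by its id (assumes a non-empty
# listing; a real client would handle the empty case).
oldest = listing[-1].get('version_id')
req = urllib.request.Request('%s/obj?version-id=%s' % (base, oldest),
                             headers=headers)
with urllib.request.urlopen(req) as resp:
    data = resp.read()
```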
To enable this new mode in a Swift cluster, the versioned_writes and symlink middlewares must be added to the proxy pipeline, and you must also set the option allow_object_versioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object req original request. versions_cont container where previous versions of the object are stored. account account name. Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is applied to the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows: ``` client_ip remote_addr end_time.datetime method path protocol status_int referer user_agent auth_token bytes_recvd bytes_sent client_etag transaction_id headers request_time source log_info start_time end_time policy_index ``` These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by the leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request.
req" }, { "data": "object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wirestatusint the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that make an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the systems page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys. 
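For instance, using the X-Copy-From form introduced below, a copy that also sets new metadata (over-writing any conflicting source keys) could look like this; the URL, token and names are placeholders:
```
curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'X-Object-Meta-Color: blue' -H 'Content-Length: 0'
```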
Server side copy can also be used to change content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be" }, { "data": "There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example: ``` curl -i -X PUT http://<storageurl>/container1/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0' ``` Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example: ``` curl -i -X COPY http://<storageurl>/container2/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0' ``` Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example: ``` curl -i -X PUT http://<host>:<port>/v1/AUTHtest1/container/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0' ``` Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example: ``` curl -i -X COPY http://<host>:<port>/v1/AUTHtest2/container/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0' ``` The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request: ``` ?multipart-manifest=get ``` If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs for Static Large Objects further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delayauthdecision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added. 
For example: ``` [DEFAULT] ... [pipeline:main] pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server ... [filter:staticweb] use = egg:swift#staticweb ``` Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values: ``` X-Container-Meta-Web-Index <index.name> X-Container-Meta-Web-Error <error.name.suffix> ``` If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well-defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting an X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable: ``` swift post -r '.r:*' container ``` You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive: ``` swift post -m 'web-index:index.html' container ``` You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings: ``` swift post -r '.r:*,.rlistings' container swift post -m 'web-listings: true' container ``` Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet: ``` swift post -m 'web-listings-css:listings.css' container ``` Set an error file: ``` swift post -m 'web-error:error.html' container ``` Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object: ``` swift post -m 'web-directory-type:text/directory' container ``` Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called the target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include an X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e.
a symlink created with an X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without an X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with an X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends an X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function.
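To illustrate, a dynamic symlink is just a zero-length PUT with the X-Symlink-Target header, and it can then be spotted in a JSON listing (a sketch; the container, object and token values are placeholders, and the exact symlink_path value shown is illustrative): ``` curl -i -X PUT http://<storage_url>/linkcont/my-link -H 'X-Auth-Token: <token>' -H 'X-Symlink-Target: targetcont/target_obj' -H 'Content-Length: 0' curl -i 'http://<storage_url>/linkcont?format=json' -H 'X-Auth-Token: <token>' ``` In the JSON listing, the entry for my-link would then include a symlink_path value such as /v1/<account>/targetcont/target_obj.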
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If an X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes an X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like normal objects. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example: ``` internal_client_conf_path = /etc/swift/container-sync-client.conf ``` Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response is called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, an HTTPTemporaryRedirect error message is returned to the client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to the sysmeta namespace. req HTTP PUT object request Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as: ``` [pipeline:main] pipeline = catch_errors cache tempauth proxy-server ``` Set account auto creation to true in proxy-server.conf: ``` [app:proxy-server] account_autocreate = true ``` And add a tempauth filter section, such as: ``` [filter:tempauth] use = egg:swift#tempauth user_admin_admin = admin .admin .reseller_admin user_test_tester = testing .admin user_test2_tester2 = testing2 .admin user_test_tester3 = testing3 user64_dW5kZXJfc2NvcmU_YV9i = testing4 ``` See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is: ``` user_<account>_<user> = <key> [group] [group] [...] [storage_url] ``` If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this: ``` user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url] ``` There are three special groups: .reseller_admin can do anything to any account for this auth .reseller_reader can GET/HEAD anything in any account for this auth
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, youll have to specify the storageurlscheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH. When a requests token and/or path start with AUTH, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user: ``` [filter:tempauth] use = egg:swift#tempauth reseller_prefix = AUTH, SERVICE SERVICErequiregroup = .service useradminadmin = admin .admin .reseller_admin userjoeacctjoe = joepw .admin usermaryacctmary = marypw .admin userglanceglance = glancepw .service ``` The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service accounts container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL: ``` from swift.common.middleware.acl import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } headervalue = formatacl(version=2, acldict=acldata) ``` To generate a curl command line from the above: ``` token=... storage_url=... 
python -c ' from swift.common.middleware.acl import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } headers = {'X-Account-Access-Control': format_acl(version=2, acl_dict=acl_data)} header_str = ' '.join(["-H '%s: %s'" % (k, v) for k, v in headers.items()]) print('curl -D- -X POST -H "x-auth-token: $token" %s ' '$storage_url' % header_str) ' ``` Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in a swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples: ``` GET <auth-prefix>/v1/<act>/auth X-Auth-User: <act>:<usr> or X-Storage-User: <usr> X-Auth-Key: <key> or X-Storage-Pass: <key> GET <auth-prefix>/auth X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> X-Auth-Key: <key> or X-Storage-Pass: <key> GET <auth-prefix>/v1.0 X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> X-Auth-Key: <key> or X-Storage-Pass: <key> ``` On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
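As a convenience, the python-swiftclient CLI can both store the key and compute a signed URL for you; a quick sketch, where the key value mykey is a placeholder (the key mechanism itself is described below): ``` swift post -m 'Temp-URL-Key:mykey' swift tempurl GET 3600 /v1/AUTH_account/container/object mykey ``` The first command sets X-Account-Meta-Temp-URL-Key on the account; the second prints a path carrying temp_url_sig and temp_url_expires query parameters valid for one hour.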
Beyond that, the middleware provides the ability to create URLs, which contain signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' key = b'mykey' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563 ``` For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating an HMAC-SHA512 signature would be: ``` import base64 import hmac from hashlib import sha512 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' key = b'mykey' hmac_body = '%s\n%s\n%s' % (method, expires, path) sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new( key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii') ``` Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm 5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==& temp_url_expires=1516741234 ``` You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=2017-12-05T21:16:03Z ``` If a prefix-based signature with the prefix pre is desired, set path to: ``` path = 'prefix:/v1/AUTH_account/container/pre' ``` The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/pre/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563& temp_url_prefix=pre ``` Another valid URL: ``` https://swift-cluster.example.com/v1/AUTH_account/container/pre/ subfolder/another_object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563& temp_url_prefix=pre ``` If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' ip_range = '1.2.3.4' key = b'mykey' hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c& temp_url_expires=1648082711& temp_url_ip_range=1.2.3.4 ``` Similarly, to lock down the ip to a range of 1.2.3.X, so starting from the ip 1.2.3.0 to 1.2.3.255: ``` import hmac from hashlib import sha256 from time import time method = 'GET' expires = int(time() + 60) path = '/v1/AUTH_account/container/object' ip_range = '1.2.3.0/24' key = b'mykey' hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path) sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest() ``` Then the following url would be valid: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83& temp_url_expires=1648082711& temp_url_ip_range=1.2.3.0/24 ``` Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563&filename=My+Test+File.pdf ``` If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563&inline ``` In some cases, the client might not be able to present the content of the object, but you still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object? temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b& temp_url_expires=1512508563&inline&filename=My+Test+File.pdf ``` This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTPXPRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTPXSENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that is now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simulataneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archivecontainer>/<length><objectname>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata" }, { "data": "A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object. 
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-Versions-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from versions container and back in container container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location" }, { "data": "In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-History-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> -H \"Destination: container/myobject\" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allowversionedwrites to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H \"X-Auth-Token: <token>\" -H \"X-Remove-Versions-Location: x\" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name. 
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on an eventlet-aware profiler. (For the future, more profilers could be added to collect more data for analysis.) Profiling all incoming requests and accumulating cpu timing statistics information for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below. Index page for browsing profile data: ``` http://SERVER_IP:PORT/__profile__ ``` List all profiles to return profile ids in json format: ``` http://SERVER_IP:PORT/__profile__/ http://SERVER_IP:PORT/__profile__/all ``` Retrieve specific profile data in different formats: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods] ``` Retrieve metrics from specific function in json format: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json http://SERVER_IP:PORT/__profile__/current/NFL?format=json http://SERVER_IP:PORT/__profile__/all/NFL?format=json NFL is defined by concatenation of file name, function name and the first line number. e.g.: account.py:50(GETorHEAD) or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD) A list of URL examples: http://localhost:8080/__profile__ (proxy server) http://localhost:6200/__profile__/all (object server) http://localhost:6201/__profile__/current (container server) http://localhost:6202/__profile__/12345?format=json (account server) ``` The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (by default), json, csv and odf spreadsheet, the last of which requires installing the odfpy library: ``` sudo pip install odfpy ``` There's also a simple visualization capability which is enabled by using the matplotlib toolkit. It is also required to be installed if
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.crossdomain.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Header-Name: {arbitrary:json,encoded:string} JSON will be forced ASCII (containing six-char uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys dont cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you" }, { "data": "acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parseaclv1() or parseaclv2()) the return value of parseaclv1() or parseaclv2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {groups: [], referrers: []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, Im looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in init. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader(Accept, text/html) Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skipacceptencoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed. 
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be" }, { "data": "req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us for accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFTCONFFILE and reset module level global constraint attrs, populating OVERRIDECONSTRAINTS AND EFFECTIVECONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the files mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the users container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digestalgorithm, hexencoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. 
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated itll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULTALLOWEDDIGESTS. logger optional logger; if provided, use it issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared" }, { "data": "digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/on127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g, {states:updating} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. 
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
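A minimal sketch of CompressingFileReader as documented above, assuming a readable file exists at the placeholder path; the round trip at the end simply demonstrates that the reader yields a well-formed gzip stream:

```
import gzip
import io

from swift.common.internal_client import CompressingFileReader

with open('/tmp/example.txt', 'rb') as f:   # placeholder input file
    reader = CompressingFileReader(f, compresslevel=6, chunk_size=4096)
    # Iterating yields gzip-compressed chunks of at most chunk_size bytes.
    compressed = b''.join(chunk for chunk in reader)

# Decompress to confirm the payload is valid gzip data.
print(gzip.GzipFile(fileobj=io.BytesIO(compressed)).read())
```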
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
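Tying the calls above together, here is a minimal sketch of typical InternalClient usage. It assumes the conventional /etc/swift/internal-client.conf; the account, container, object and file names are placeholders:

```
from swift.common.internal_client import InternalClient, UnexpectedResponse

client = InternalClient('/etc/swift/internal-client.conf',
                        'example-internal-client', request_tries=3)
try:
    client.create_container('AUTH_test', 'backups')
    with open('/tmp/report.csv', 'rb') as fobj:
        client.upload_object(fobj, 'AUTH_test', 'backups', '2024/report.csv',
                             headers={'Content-Type': 'text/csv'})
    # Per set_object_metadata() above, this overwrites the object's
    # existing user metadata with the given dict.
    client.set_object_metadata('AUTH_test', 'backups', '2024/report.csv',
                               {'reviewed': 'no'},
                               metadata_prefix='X-Object-Meta-')
except UnexpectedResponse as err:
    print('backend returned an unacceptable status:', err)
```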
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group : param pid: process id : param sig: signal to send Send signal to process and check process name : param pid: process id : param sig: signal to send : param name: name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the servers binaries are installed or" }, { "data": "server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that arent responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesnt use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but its hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldnt work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract to the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache. 
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include tempurlsignature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to getswiftinfo. in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large" }, { "data": "req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. maxgettime maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) responsebodylength optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .appiterrange(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnect. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segments etag doesnt match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from fromr to tor. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key. 
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object" }, { "data": "HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasnt stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but its also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Requests Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers[Content-Range] sets and gets the value of headers.environ[HTTPCONTENTRANGE] Bases: object Wraps a Requests If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Requests Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. 
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will be all considered as syntactically invalid, thus, a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid, ValueError will be thrown so that the header will be ignored. value not starts with bytes= range value start is greater than the end, eg. bytes=5-3 range does not have start or end, eg. bytes=- range does not have hyphen, eg. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method init made sure that any range in ranges list is syntactically valid. So if length is None or size of the ranges is zero, then the Range header should be ignored which will eventually make the response to be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, 416 will be" }, { "data": "if a returned list has at least one element, the list indicates that there is at least one range valid and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as a Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with an property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this requests environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the requests X-Timestamp header, as a Timestamp Calls the application with this requests environment. Returns a Response object that wraps up the applications result. 
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples:

```
['a'] = split_path('/a')
['a', None] = split_path('/a', 1, 2)
['a', 'c'] = split_path('/a/c', 1, 2)
['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True)
```

minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location.
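A short sketch of the swob request/response round trip documented above; the tiny WSGI app is illustrative only. Note that conditional handling, including Range, is opt-in via the conditional_response flag on Response:

```
from swift.common.swob import Request, Response

def hello_app(env, start_response):
    # A toy WSGI app for demonstration purposes only.
    req = Request(env)
    resp = Response(request=req, body=b'hello\n',
                    content_type='text/plain',
                    conditional_response=True)  # honor Range / If-* headers
    return resp(env, start_response)

req = Request.blank('/v1/AUTH_test/c/o', headers={'Range': 'bytes=0-2'})
resp = req.get_response(hello_app)     # wraps the app's result in a Response
print(resp.status, resp.body)          # 206 Partial Content  b'hel'

# Range objects resolve themselves against a content length; note the
# returned end position is exclusive.
print(req.range.ranges_for_length(6))  # [(0, 3)]
```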
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of a If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need support conditional requests. Its most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or appiter to reset the contentlength properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators close methods, if" }, { "data": "Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of getnext_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gced Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads. 
This is for fairness; if the network is outpacing the CPU, well always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients dont starve. The number 5 here was chosen by making stuff up. Its not every single chunk, but its not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterators next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used" }, { "data": "Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <accountname>/<containername>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces. 
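As a usage sketch for the GreenAsyncPile described above (the fetch function and node names are made up for illustration):

```python
from swift.common.utils import GreenAsyncPile


def fetch(node):
    # Stand-in for some blocking per-node work, e.g. a backend request.
    return '%s: ok' % node


pile = GreenAsyncPile(3)  # at most 3 jobs run concurrently
for node in ('node1', 'node2', 'node3', 'node4'):
    pile.spawn(fetch, node)

# Results are yielded as they complete, not in the order spawned.
for result in pile:
    print(result)

# Alternatively, pile.waitall(0.5) collects whatever finishes in 0.5s.
```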
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp.
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard ranges ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard ranges parent shard range, the parents parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard ranges root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard ranges root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class init() method and requires all of the init() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if objectcount or bytesused cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard" }, { "data": "This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. 
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shardsaccount, rootcontainer, parent_container and timestamp. a string of the form <accountname>/<containername> Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if objectcount or bytesused cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and endmarker. If none of includes, marker or endmarker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be" }, { "data": "A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found. 
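Pulling the ShardRange pieces above together, a small sketch (the account and container names are illustrative):

```python
from swift.common.utils import ShardRange, Timestamp

ts = Timestamp.now()
# A name for a first-generation shard of root container 'c'.
name = ShardRange.make_path('.shards_AUTH_test', 'c', 'c', ts, 0)
sr = ShardRange(name, ts, lower='d', upper='m', object_count=42)

# Namespace bounds: lower is excluded, upper is included.
assert 'kiwi' in sr
assert 'zebra' not in sr

# Setting stats also advances meta_timestamp to the current time.
sr.update_meta(100, 2 ** 20)
```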
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents has been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ```
>>> si = Spliterator(["abcde", "fg", "hijkl"])
>>> ''.join(si.take(4))
"abcd"
>>> ''.join(si.take(3))
"efg"
>>> ''.join(si.take(1))
"h"
>>> ''.join(si.take(3))
"ijk"
>>> ''.join(si.take(3))
"l"  # shorter than requested; this can happen with the last iterator
``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts.Timeout, it reduces the number of context switching in eventlet by avoiding to schedule actions (throw an Exception), then unschedule them if the timeouts are cancelled. For example: at T+0 a timeout(10) is requested, so the watchdog greenlet sleeps 10 seconds; at T+2 a timeout(5) is requested, which will expire before the first, so the watchdog greenlet is woken up to calculate a new sleep period; when the shorter timeout expires, the greenlet sleeps again until it must wake up for the 1st timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
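For illustration, a minimal sketch of driving a Watchdog directly with the methods described here and below (the slow_operation call is hypothetical):

```python
import eventlet

from swift.common.utils import Watchdog, WatchdogTimeout

watchdog = Watchdog()
watchdog.spawn()  # start the watchdog greenthread

try:
    # Raise eventlet.Timeout in this greenthread if the block
    # takes longer than 1 second.
    with WatchdogTimeout(watchdog, 1.0, eventlet.Timeout):
        slow_operation()  # hypothetical blocking call
except eventlet.Timeout:
    pass  # the operation timed out
```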
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expire, must inherit from eventlet.Timeout timeout_at allow to force the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. eg: they can be used to exclude some elements based on a custom condition defined by the caller. hookpre(device|partition|suffix|hash) are called before yielding the element, hookpos(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. eg: saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yieldhashdirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitionsfilter a callable taking (datadirpath, [list of parts]) as parameters and returning a [list of parts] suffixesfilter a callable taking (partpath, [list of suffixes]) as parameters and returning a [list of suffixes] hashesfilter a callable taking (suffpath, [list of hashes]) as parameters and returning a [list of hashes] hookpredevice a callable taking device_path as parameter hookpostdevice a callable taking device_path as parameter hookprepartition a callable taking part_path as parameter hookpostpartition a callable taking part_path as parameter hookpresuffix a callable taking suff_path as parameter hookpostsuffix a callable taking suff_path as parameter hookprehash a callable taking hash_path as parameter hookposthash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yieldhashdirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize no of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesnt crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string. 
If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N" }, { "data": "Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether thats multipart/byteranges or not. This is almost, but not quite, the inverse of requesthelpers.httpresponsetodocument_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the ranges Content-Type header multipart/byteranges case: equal to the response length). If omitted, * will be used. Each partiter will be exhausted prior to calling next(rangesiter). boundary MIME boundary to use, sans dashes (e.g. boundary, not boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See documentiterstohttpresponse_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we dont log a 499 in the proxy just because we realized we dont care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the contenttype string minus any swiftbytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and endmarker. If none of includes, marker or endmarker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for a which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it. 
It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form: ```
<hash>[_<epoch>].db
``` where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the Python select module to test if poll() is available on the platform. Return partition number for given hex hash and partition power. hex_hash A hash string part_power partition power the partition number devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but an X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ```
'lock', None => 'lock'
'lock', 0 => 'lock'
'lock', 1 => 'lock-1'
'object', 2 => 'object-2'
``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file.
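For example, a sketch of guarding a small state file with this context manager (the path is illustrative):

```python
import os

from swift.common.utils import lock_file

# Block for up to 10 seconds waiting for an exclusive flock on the
# file; the opened file object is yielded for use inside the block.
with lock_file('/var/run/swift/demo.lock', timeout=10,
               append=True, unlink=False) as fp:
    fp.write('owned by pid %d\n' % os.getpid())
```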
This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in" }, { "data": "If None, defaults to DEFAULTLOCKTIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULTLOCKTIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, file or directory has to be opened in Write mode. Python doesnt allow directories to be opened in Write Mode. So we workaround by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULTLOCKTIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string to distinguishes different type of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new paths filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if target directory already exists. This behaviour is similar to Python 3.xs os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document contenttype content type of the appiter, used to determine whether it conains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in MemcacheRing class, such as memcached set, get and etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not divider) readchunksize size of strings read via input_file.read() Ensures the path is a directory or makes it if not. 
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file)" }, { "data": "input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not divider) readchunksize size of strings read via input_file.read() Get a string representation of a nodes location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name=somefile; filename=test.html Return data like (form-data, {name: somefile, filename: test.html}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (firstbyte, lastbyte, total_size). See RFC 7233 section 4.2 for details on the header format, but its basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset, 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset, 'UTF-8'), ('level', '1')]) ``` contenttype contenttype to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >> parsedbfilename(\"ab2134.db\") ('ab2134', None, '.db') >> parsedbfilename(\"ab2134_1234567890.12345.db\") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash , epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard librarys http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process. 
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be" }, { "data": "Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators have been executed before continuing. :param iterable: an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsyncd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories. 
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to a object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different type of servers: If option replication_server is None then this decorator doesnt matter. If option replication_server is True then ONLY decorated with this decorator methods will be started. If option replication_server is False then decorated with this decorator methods will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are" }, { "data": ":param its: list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate devices variables inside a rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dirext if present directories that end with dirext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that items value will determine if the replication ip address and port are returned. If neither usereplication nor nodedict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error. 
value some base64-encoded data allow_line_breaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ```
READY=1
RELOADING=1
STOPPING=1
STATUS=<some string>
``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swift's wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensure it is synced to disk, then perform a move to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters.
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. servers_per_port (int) The number of workers to run per port. Yields all known listen sockets. Log a server's exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. Yields all known listen sockets. Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited.
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrapsocket if conf specifies certfile Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in globalconf will override those in conffile except where the conf_file option is preceded by set. allowmodifypipeline if True, and the context is a pipeline, and the loaded app has a modifywsgipipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI" }, { "data": "env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env[PATHINFO]. url += quote(environ[PATHINFO]) querystring The new querystring or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += ? + env[QUERY_STRING] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. makeenv makesubrequest calls this make_env to help build the swob.Request. 
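A sketch of typical middleware usage (req and self.app are assumed to be the middleware's incoming request and wrapped app; the path and swift_source are illustrative):

```python
from swift.common.wsgi import make_subrequest

# Issue an internal HEAD to another path, reusing the incoming
# request's WSGI environment.
sub_req = make_subrequest(req.environ, method='HEAD',
                          path='/v1/AUTH_test/c/o',
                          swift_source='MYMW')
sub_resp = sub_req.get_response(self.app)
```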
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a serversperport setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allowmodifypipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whos first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclasses (e.g. StoragePolicy, ECStoragePolicy, etc) or use reloadstoragepolicies() to load POLICIES from swift.conf. The objectring property is lazy loaded once the services swiftdir is known via getobjectring(), but it may be over-ridden via object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage" }, { "data": "Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reloadstoragepolicies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. 
Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This shorthand form of the important parts of the EC schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index; if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given that the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1
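The quorum rules above reduce to simple arithmetic. The sketch below restates them as plain Python (it does not call into Swift) for a 3-replica policy and a hypothetical 10+4 Reed-Solomon policy:

```python
# Plain-arithmetic illustration of the quorum rules described above.
def replicated_quorum(replicas):
    # Replicated policies need a simple majority.
    return replicas // 2 + 1

def ec_quorum(ec_ndata, min_parity_fragments_needed=1):
    # EC policies need all data fragments plus the scheme's minimum
    # parity; 1 is the Reed-Solomon minimum mentioned above.
    return ec_ndata + min_parity_fragments_needed

assert replicated_quorum(3) == 2
assert ec_quorum(10) == 11   # e.g. a 10+4 Reed-Solomon policy
```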
Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced:

- If a policy with index 0 is not declared and no other policies are defined, Swift will create one
- The policy index must be a non-negative integer
- If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default
- Policy indexes must be unique
- Policy names are required
- Policy names are case insensitive
- Policy names must contain only letters, digits or a dash
- Policy names must be unique
- The policy name Policy-0 can only be used for the policy with index 0
- If any policies are defined, exactly one policy must be declared default
- Deprecated policies can not be declared the default

Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy. If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
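A quick round trip through get_policy_string() and split_policy_string(), using 'objects' as the base the way Swift's object layer does; note that decoding assumes the encoded index actually maps to a policy in swift.conf, else PolicyError is raised:

```python
from swift.common.storage_policy import get_policy_string, split_policy_string

# Index 0 is the legacy Policy-0, so no suffix is appended.
assert get_policy_string('objects', 0) == 'objects'
assert get_policy_string('objects', 1) == 'objects-1'

# Decoding returns the base plus the StoragePolicy instance; this
# assumes swift.conf defines a policy with index 1.
base, policy = split_policy_string('objects-1')
assert base == 'objects'
```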
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.name_check.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, I'm looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in __init__. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader('Accept', 'text/html') Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skip_accept_encoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed. Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always b'\n' for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object
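For example, a backend GET via http_connect() might look like the following sketch; the IP, port, device, and partition are placeholders for values you would normally pull from a ring:

```python
# A sketch only: node details and the policy index header are examples.
from swift.common.bufferedhttp import http_connect

conn = http_connect('127.0.0.1', 6200, 'sda1', 312, 'GET',
                    '/AUTH_test/cont/obj',
                    headers={'X-Backend-Storage-Policy-Index': '0'})
resp = conn.getresponse()   # a BufferedHTTPResponse
body = resp.read()
resp.close()
```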
Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if container header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isn't possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory.
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be" }, { "data": "req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us for accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFTCONFFILE and reset module level global constraint attrs, populating OVERRIDECONSTRAINTS AND EFFECTIVECONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the files mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the users container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digestalgorithm, hexencoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. 
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated itll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULTALLOWEDDIGESTS. logger optional logger; if provided, use it issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared" }, { "data": "digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/on127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g, {states:updating} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. 
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
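A usage sketch for InternalClient; the conf path, user-agent string, and AUTH_test account are example values only:

```python
# A sketch only: '/etc/swift/internal-client.conf', 'example-daemon' and
# 'AUTH_test' are placeholders for your deployment's values.
from swift.common.internal_client import InternalClient

client = InternalClient('/etc/swift/internal-client.conf',
                        'example-daemon', request_tries=3)

# Walk an account: containers first, then the objects inside each.
for container in client.iter_containers('AUTH_test'):
    for obj in client.iter_objects('AUTH_test', container['name']):
        print(obj['name'])
```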
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group pid process id sig signal to send Send signal to process and check process name pid process id sig signal to send name name to ensure target process Try to increase resource limits of the OS. Move PYTHON_EGG_CACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the server's binaries are installed or not. server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that aren't responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesn't use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but it's hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldn't work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parse_socket_string() for details. Generate a new pool item. In order for the pool to function, either this method must be overridden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract from the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache.
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include tempurlsignature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to getswiftinfo. in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large" }, { "data": "req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. maxgettime maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) responsebodylength optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .appiterrange(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnect. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segments etag doesnt match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from fromr to tor. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key. 
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allow_open_expired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if the replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the request's X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middleware's alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if an EC policy is in use). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove the Range header from the request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everything's okay, as native strings HTTPBadRequest if something's not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key.
key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object" }, { "data": "HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasnt stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but its also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Requests Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers[Content-Range] sets and gets the value of headers.environ[HTTPCONTENTRANGE] Bases: object Wraps a Requests If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Requests Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. 
Bases: object Wraps a Request's Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered syntactically invalid; if the range value contains at least one of them, the entire range is considered invalid and a ValueError is thrown so that the header will be ignored: the value does not start with bytes=; a range's start is greater than its end, e.g. bytes=5-3; a range has neither a start nor an end, e.g. bytes=-; a range does not have a hyphen, e.g. bytes=45; a range value is non-numeric; or any combination of the above. Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by the underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method __init__ made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored, which will eventually make the response be a 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, and 416 will be returned. If a returned list has at least one element, the list indicates that there is at least one valid range and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content
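A minimal sketch of resolving a Range header against a known content length with the wrapper documented above:

```
from swift.common.swob import Range

r = Range('bytes=0-4,10-')
# End positions are exclusive (the method adds 1 to each inclusive end).
assert r.ranges_for_length(20) == [(0, 5), (10, 20)]
assert r.ranges_for_length(None) is None            # ignore header; 200
assert Range('bytes=50-').ranges_for_length(20) == []  # unsatisfiable; 416
```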
Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as an Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with a property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this request's environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the request's X-Timestamp header, as a Timestamp Calls the application with this request's environment. Returns a Response object that wraps up the application's result. application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of the last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment
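A minimal sketch of exercising a WSGI app through swob, using the Request.blank constructor and get_response helper documented above:

```
from swift.common.swob import Request, Response

def simple_app(env, start_response):
    # A swob Response is itself a WSGI callable.
    return Response(body=b'hello', status=200)(env, start_response)

req = Request.blank('/v1/AUTH_test/c/o',
                    headers={'X-Trans-Id': 'tx123'}, body=b'')
resp = req.get_response(simple_app)
assert resp.status_int == 200 and resp.body == b'hello'
```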
Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location. Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of an If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need to support conditional requests. It's most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or app_iter to reset the content_length properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for the WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators' close methods, if any. Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of _get_next_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantics as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets garbage collected
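A minimal sketch of the wsgify decorator documented above, which turns a Request-to-Response function into a WSGI callable:

```
from swift.common.swob import Request, Response, HTTPNotFound, wsgify

@wsgify
def my_app(req):
    if req.path == '/ping':
        return Response(body=b'pong')
    # Raised HTTPExceptions are caught and returned as responses.
    raise HTTPNotFound(request=req)

resp = Request.blank('/ping').get_response(my_app)
assert resp.body == b'pong'
```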
Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over a wrapped iterator, providing an opportunity to switch greenthreads. This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve. The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 means that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for the first result to come in. timeout seconds to wait for results first item to come back, or None
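A minimal sketch of the GreenAsyncPile described above; results come back in completion order, not submission order, so they are sorted here for the assertion:

```
from swift.common.utils import GreenAsyncPile

def fetch(node):
    # Stand-in computation; in Swift this would talk to a backend node.
    return node * 2

pile = GreenAsyncPile(5)          # up to 5 concurrent green threads
for node in range(10):
    pile.spawn(fetch, node)
results = sorted(pile)            # iterate the pile to collect results
assert results == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```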
Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used item. Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <account_name>/<container_name>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces. Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately)
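A minimal sketch of the rate-limited iterator described above; eventlet sleeps are injected so the wrapped iterator yields at most N items per second:

```
from swift.common.utils import RateLimitedIterator

rows = iter(range(1000))
# Yield at most 100 items per second; the first 200 are not limited.
limited = RateLimitedIterator(rows, 100, limit_after=200)
for row in limited:
    pass   # process row
```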
Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp. This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard range's ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard range's parent shard range, the parent's parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grandparent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard range's root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard range's root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class __init__() method and requires all of the __init__() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if object_count or bytes_used cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard ranges. This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name>
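A minimal sketch of round-tripping a shard container name with the ShardName helpers described above; str() conversion of a ShardName to its path form is assumed, and attribute names on the parsed result are not relied on:

```
from swift.common.utils import ShardName, Timestamp

name = ShardName.create('.shards_AUTH_test', 'root_c', 'root_c',
                        Timestamp.now(), 0)
parsed = ShardName.parse(str(name))
assert str(parsed) == str(name)
# Names lacking the '-'-separated trailing components do not parse:
# ShardName.parse('AUTH_test/root_c') raises ValueError.
```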
Given a value that may be either the name or the number of a state, return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
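A minimal sketch of resolving shard range states with the resolve_state classmethod described above; the concrete state numbers are implementation details, so only the round-trip is asserted:

```
from swift.common.utils import ShardRange

num, name = ShardRange.resolve_state('active')
assert name == 'active'
assert ShardRange.resolve_state(num) == (num, name)  # number round-trips
# An unknown name or number raises ValueError.
```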
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ``` >>> si = Spliterator(["abcde", "fg", "hijkl"]) >>> ''.join(si.take(4)) "abcd" >>> ''.join(si.take(3)) "efg" >>> ''.join(si.take(1)) "h" >>> ''.join(si.take(3)) "ijk" >>> ''.join(si.take(3)) "l" # shorter than requested; this can happen with the last iterator ``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts.Timeout, it reduces the number of context switches in eventlet by avoiding scheduling actions (throwing an Exception), then unscheduling them if the timeouts are cancelled. For example: at T+0 a timeout(10) is requested, so the watchdog greenlet sleeps for 10 seconds; at T+1 a timeout(15) is requested, which expires after the current one, so there is no need to wake up the watchdog greenlet; at T+2 a timeout(5) is requested, which expires before the current one, so the watchdog greenlet wakes up to calculate a new sleep period; at T+7 the third timeout expires and the watchdog greenlet then wakes up for the 1st timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory. (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated, e.g. they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing, e.g. saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize no of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware) env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar... start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string. If the comma-separated string is empty/None, just returns item.
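A minimal sketch of the ETA helper described above, assuming the (start_time, current_value, final_value) parameters documented here:

```
import time
from swift.common.utils import compute_eta

start = time.time() - 30                        # work began 30 seconds ago
amount, unit = compute_eta(start, 300, 1200)    # 300 of 1200 items done
# e.g. roughly (90, 's'): about 90 seconds remaining at the current rate
```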
Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header Finally, there is one optional key that is used in the multipart/byteranges case: entity_length: length of the requested entity (not necessarily equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. boundary, not --boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content_type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in the given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it.
It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form: ``` <hash>[_<epoch>].db ``` where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the python select module to test if poll() is available on the platform. Return the partition number for a given hex hash and partition power. hex_hash A hash string part_power partition power partition number
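A minimal sketch of the hash-to-partition computation described above. The exact helper name and implementation in swift.common.utils may differ; the underlying ring math is to take the top 32 bits of the hash and shift by (32 - part_power):

```
import struct
from binascii import unhexlify

def partition_for_hash(hex_hash, part_power):
    # Unpack the first 4 bytes of the hash as a big-endian unsigned int.
    raw = struct.unpack_from('>I', unhexlify(hex_hash))[0]
    return raw >> (32 - part_power)

# With part_power=10 there are 2**10 = 1024 partitions:
assert 0 <= partition_for_hash('d41d8cd98f00b204e9800998ecf8427e', 10) < 1024
```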
Get the partition from a path. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but an X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ``` 'lock', None => 'lock' 'lock', 0 => 'lock' 'lock', 1 => 'lock-1' 'object', 2 => 'object-2' ``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to a file descriptor at the target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty.
Context manager that acquires a lock on a file. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in Write mode. Python doesn't allow directories to be opened in Write Mode. So we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string that distinguishes different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if the target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set and get. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Ensures the path is a directory or makes it if not. Errors if the path exists but is a file or on permissions failure. path path to create
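A minimal sketch of serializing access to a directory with the lock_path context manager described above (the path here is a hypothetical example):

```
from swift.common.utils import lock_path

with lock_path('/srv/node/d1/tmp', timeout=10):
    pass  # do work while holding the lock; LockTimeout raised on expiry
```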
Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ("form-data", {"name": "somefile", "filename": "test.html"}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')]) ``` content_type content_type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >>> parse_db_filename("ab2134.db") ('ab2134', None, '.db') >>> parse_db_filename("ab2134_1234567890.12345.db") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. iterable an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
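A minimal sketch of the rate limiter described above, assuming the helper is importable from swift.common.utils as documented here; the function returns the updated running_time that must be fed back on the next call:

```
from swift.common.utils import ratelimit_sleep

running_time = 0
for chunk in range(100):
    # Allow at most 50 iterations per second; sleeps as needed.
    running_time = ratelimit_sleep(running_time, 50)
```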
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to a object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different type of servers: If option replication_server is None then this decorator doesnt matter. If option replication_server is True then ONLY decorated with this decorator methods will be started. If option replication_server is False then decorated with this decorator methods will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are" }, { "data": ":param its: list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate devices variables inside a rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dirext if present directories that end with dirext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that items value will determine if the replication ip address and port are returned. If neither usereplication nor nodedict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error. 
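A quick stdlib-only illustration of the problem being solved here: by default base64.b64decode() silently drops characters outside the base64 alphabet, while validate=True turns them into an error, which is the strict behavior usually wanted for user-supplied values:

```python
import base64
import binascii

# Non-alphabet characters are silently discarded by default ...
print(base64.b64decode('Zm9v!!'))            # b'foo' -- the '!!' vanished

# ... but strict validation treats them as an error.
try:
    base64.b64decode('Zm9v!!', validate=True)
except binascii.Error as err:
    print('rejected:', err)
```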
value some base64-encoded data allowlinebreaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swifts wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and wont lead to directory traversal when" }, { "data": "device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowedsynchosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.containersyncrealms.ContainerSyncRealms to validate against. A tuple of (errorstring, validatedendpoint, realm, realmkey). The errorstring will None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensure it is synced to disk, then perform a move to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) well intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploys ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entrypointname entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters. 
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entrypointname entry point of middleware or app (Swift only) True if entrypointname is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The serversperport integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. serversperport (int) The number of workers to run per port. Yields all known listen sockets. Log a servers exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and" }, { "data": "Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will passed into the logsockexit() and registerworkerstart() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The sockets (port, server_idx) as yielded by newworkersocks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. Yields all known listen sockets. Log a servers exit. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opqaue_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will passed into the logsockexit() and registerworkerstart() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited. 
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrapsocket if conf specifies certfile Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in globalconf will override those in conffile except where the conf_file option is preceded by set. allowmodifypipeline if True, and the context is a pipeline, and the loaded app has a modifywsgipipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI" }, { "data": "env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env[PATHINFO]. url += quote(environ[PATHINFO]) querystring The new querystring or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += ? + env[QUERY_STRING] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original envs HTTPUSERAGENT, such as %(orig)s StaticWeb. You also set agent to None to use the original envs HTTPUSERAGENT or to have no HTTPUSERAGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. makeenv makesubrequest calls this make_env to help build the swob.Request. 
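As a rough sketch of how a middleware might use this helper (the import path and make_subrequest signature follow the description above; the path and swift_source values are illustrative):

```python
from swift.common.wsgi import make_subrequest

def head_object(env, account, container, obj):
    # Build a subrequest against the same WSGI environment.
    sub = make_subrequest(
        env,
        method='HEAD',
        path='/v1/%s/%s/%s' % (account, container, obj),
        swift_source='EXAMPLE',  # marks the subrequest in proxy logs
    )
    return sub
```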
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a serversperport setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allowmodifypipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whos first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclasses (e.g. StoragePolicy, ECStoragePolicy, etc) or use reloadstoragepolicies() to load POLICIES from swift.conf. The objectring property is lazy loaded once the services swiftdir is known via getobjectring(), but it may be over-ridden via object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage" }, { "data": "Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reloadstoragepolicies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. 
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index. if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given that the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also, if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies defined, Swift will create one The policy index must be a non-negative integer If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default Policy indexes must be unique Policy names are required Policy names are case insensitive Policy names must contain only letters, digits or a dash Policy names must be unique The policy name Policy-0 can only be used for the policy with index 0 If any policies are defined, exactly one policy must be declared default Deprecated policies can not be declared the default Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string." } ]
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.ratelimit.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential: ``` +++ | Field | Value | +++ | access | c2e30f2cd5204b69a39b3f1130ca8f61 | | links | {u'self': u'http://controller:5000/v3/......'} | | project_id | 407731a6c2d0425c86d1e7f12a900488 | | secret | baab242d192a4cd6b68696863e07ed59 | | trust_id | None | | user_id | 00f0ee06afe74f81b410f3fe03d34fbc | +++ ``` An example client using keystone auth with the python boto library will be: ``` from boto.s3.connection import S3Connection connection = S3Connection( aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61', aws_secret_access_key='baab242d192a4cd6b68696863e07ed59', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is: ``` [pipeline:main] pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server ``` When using keystone, the config will be: ``` [pipeline:main] pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server ``` Finally, add the s3api middleware section: ``` [filter:s3api] use = egg:swift#s3api ``` Note keystonemiddleware.authtoken can be located before/after s3api but we recommend putting it before s3api, because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3, so any issues that existed in swift3 may still remain. Please review the descriptions in the example proxy-server.conf and make sure you understand what each option does before enabling it. Compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for details. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ``` [filter:s3token] use = egg:swift3#s3token auth_uri = https://keystonehost:35357 auth_version = 3 ``` you should now use ``` [filter:s3token] use = egg:swift#s3token auth_uri = https://keystonehost:35357/v3 ``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
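The partNumber validation rules described earlier in this section (a part number must lie between 1 and the maximum allowed parts, and must not exceed parts_count when that is known) can be expressed as a small standalone sketch; the function name and use of ValueError here are illustrative, while the middleware itself raises InvalidPartArgument and InvalidPartNumber:

```python
def validate_part_number(part_number, max_upload_part_num, parts_count=None):
    """Return a valid integer part number, or raise ValueError."""
    try:
        num = int(part_number)
    except (TypeError, ValueError):
        raise ValueError('partNumber must be an integer')
    # Criterion 1: between 1 and the maximum allowed parts, inclusive;
    # the maximum is the larger of the configured limit and parts_count.
    upper = max(max_upload_part_num, parts_count or 0)
    if not 1 <= num <= upper:
        raise ValueError('partNumber must be between 1 and %d' % upper)
    # Criterion 2: no greater than the number of parts that actually exist.
    if parts_count is not None and num > parts_count:
        raise ValueError('partNumber %d exceeds parts count %d'
                         % (num, parts_count))
    return num
```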
Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift-specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a UTF-8 encoded non-ASCII string as a text. Why do we need this? The original lxml.etree.Element supports only unicode for the text. That hurts maintainability because we would have to call a lot of encode/decode methods to apply the account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. UTF-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats for the argument, for the S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs.
The current s3api (real S3) ACL model is as follows: ``` AccessControlPolicy: Owner: AccessControlList: Grant[n]: (Grantee, Permission) ``` Each bucket or object has its own ACL consisting of an Owner and an AccessControlList. An AccessControlList can contain some Grants. By default, the AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant includes a single pair of Grantee and Permission. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you want more detailed information about the S3 ACL model, please see the official documentation: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. (See http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACLs. Bases: object Grant class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantees. Methods: init: create a Grantee instance elem: create an ElementTree from itself Static Methods: from_header: convert a grantee string in an HTTP header to a Grantee instance. from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic Acl handling. (i.e. it will check acl from ACL_MAP by using HEAD) Make a handler with the name of the controller. (e.g.
BucketAclHandler is for BucketController) It consists of method(s) for the actual S3 methods on controllers, as follows. Example: ``` class BucketAclHandler(BaseAclHandler): def PUT(self, app): # << put acl handling algorithms here for PUT bucket >> ``` Note If the method does NOT need to call get_response again outside of the acl checking, the method has to return the response it needs at the end of the method. Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step when requesting backend Swift for an incoming request. The BASE container name is always w/o MULTIUPLOAD_SUFFIX Any check timing is ok but we should check it as soon as possible. | Controller | Verb | CheckResource | Permission | |:-|:-|:-|:-| | Part | PUT | Container | WRITE | | Uploads | GET | Container | READ | | Uploads | POST | Container | WRITE | | Upload | GET | Container | READ | | Upload | DELETE | Container | WRITE | | Upload | POST | Container | WRITE | Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acl like as x-container-read to s3acl Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container exists. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket requests. Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
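The handler Example shown above is pseudocode; a runnable version of such a handler might look like the following sketch. The import path matches the module under discussion, but the _handle_acl call and its arguments are assumptions based on the handler pattern described here, not verified API:

```python
from swift.common.middleware.s3api.acl_handlers import BaseAclHandler

class ExampleBucketAclHandler(BaseAclHandler):
    """Hypothetical handler; the method name matches the HTTP verb."""

    def PUT(self, app):
        # Run whatever ACL checks the PUT bucket operation needs, then,
        # since no further get_response call is needed outside the ACL
        # check, return the response the controller should use.
        return self._handle_acl(app, 'HEAD', permission='WRITE')
```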
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
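For illustration, a client could drive the bulk delete API with plain HTTP along these lines (a hypothetical sketch using the third-party requests library; the storage URL and token are placeholders):

```python
import requests

storage_url = 'http://127.0.0.1:8080/v1/AUTH_test'  # placeholder
token = 'AUTH_tk...'                                 # placeholder

# Newline-separated, URL-encoded items: objects and (empty) containers.
body = '\n'.join([
    '/container_name/obj%20one',
    '/container_name/obj_two',
    '/empty_container',
])
resp = requests.post(
    storage_url + '?bulk-delete',
    headers={'X-Auth-Token': token,
             'Content-Type': 'text/plain',
             'Accept': 'application/json'},
    data=body,
)
print(resp.status_code)  # always 200 OK; parse the body for real results
print(resp.json())       # e.g. {"Number Deleted": 3, ...}
```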
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequests proxy log will have a swift.source set to BD the logs content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is" }, { "data": "inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domains CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environments Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and its unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | Metadata Use X-Container-Meta-Quota-Bytes Maximum size of the container, in bytes. X-Container-Meta-Quota-Count Maximum object count of the container. The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. 
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
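Before the encryption internals below, it may help to see how the feature is wired up. The following is a minimal deployment sketch for proxy-server.conf; the pipeline shown is abbreviated, and the root secret value is a placeholder that you must replace with your own high-entropy value:
```
[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# Placeholder -- generate your own value, e.g. with: openssl rand -base64 32
encryption_root_secret = CHANGEME_BASE64_VALUE

[filter:encryption]
use = egg:swift#encryption
# disable_encryption = False
```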
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value.
req – a swob Request
keys – a dict of encryption keys
Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key.
crypto – a Crypto instance
value – value to encrypt
key – crypto key to use
Returns: a tuple of (encrypted value, crypto_meta) where crypto_meta is a dict of form returned by get_crypto_meta()
Raises: ValueError if value is empty
Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided.
value – a base64-encoded value to decrypt
key – crypto key to use
crypto_meta – a crypto-meta dict of form returned by get_crypto_meta()
decoder – function to turn the decrypted bytes into useful data
Returns: decrypted value
Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ```
value – value to decrypt
key – crypto key to use
required – if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta.
decoder – function to turn the decrypted bytes into useful data
Returns: decrypted value if crypto meta is found, otherwise the unmodified value
Raises: EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found.
Extract a crypto_meta dict from a header.
header_name – name of header that may have crypto_meta
check – if True validate the crypto meta
Returns: A dict containing crypto_meta items
Raises: EncryptionException if an error occurs while parsing the crypto meta
Determine if a response should be decrypted, and if so then fetch keys.
req – a Request object
crypto_meta – a dict of crypto metadata
Returns: a dict of decryption keys
Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key.
crypto_meta – a dict of crypto-meta
wrapping_key – key to be used to decrypt the wrapped key
Returns: an unwrapped key
Raises: HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid
Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions.
put_keys – a dict of decryption keys used for object PUT.
post_keys – a dict of decryption keys used for object POST.
Returns: A list of headers with any encrypted headers replaced by their decrypted values. 
Raises: HTTPInternalServerError if any error occurs while decrypting headers
Decrypts a multipart mime doc response body.
resp – application response
boundary – multipart boundary string
body_key – decryption key for the response body
crypto_meta – crypto_meta for the response body
Returns: generator for decrypted response body
Decrypts a response body.
resp – application response
body_key – decryption key for the response body
crypto_meta – crypto_meta for the response body
offset – offset into object content at which response body starts
Returns: generator for decrypted response body
This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache: ``` [pipeline:main] pipeline = ... cache etag-quoter ... [filter:etag-quoter] use = egg:swift#etag_quoter ``` Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is: ``` <form action="<swift-url>" method="POST" enctype="multipart/form-data"> <input type="hidden" name="redirect" value="<redirect-url>" /> <input type="hidden" name="max_file_size" value="<bytes>" /> <input type="hidden" name="max_file_count" value="<count>" /> <input type="hidden" name="expires" value="<unix-timestamp>" /> <input type="hidden" name="signature" value="<hmac>" /> <input type="file" name="file1" /><br /> <input type="submit" /> </form> ``` Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input: ``` <input type="hidden" name="x_delete_at" value="<unix-timestamp>" /> <input type="hidden" name="x_delete_after" value="<seconds>" /> ``` If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input: ``` <input type="hidden" name="content-type" value="text/html" /> <input type="hidden" name="content-encoding" value="gzip" /> ``` The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of container with a url like: ``` https://swift-cluster.example.com/v1/AUTH_account/container/ ``` Optionally, you can include an object prefix to better separate different users' uploads, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` Note the form method must be POST and the enctype must be set as multipart/form-data. 
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted; after that time the form is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature: ```
import hmac
from hashlib import sha512
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count, expires)
# note: on Python 3, hmac.new() requires bytes for both key and message
signature = hmac.new(key.encode(), hmac_body.encode(), sha512).hexdigest()
``` The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest: there is no way to parse all the attributes on the server-side without reading the whole thing into memory, and to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored. Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP.
app – The next WSGI filter or app in the paste.deploy chain.
conf – The configuration dict for the middleware.
The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy. The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly. 
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the get_root_secret function. Creates an encryption key that is unique for the given path.
path – the (WSGI string) path of the resource being encrypted.
secret_id – the id of the root secret from which the key should be derived.
Returns: an encryption key.
Raises: UnknownSecretIdError if the secret_id is not recognised.
Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function: ``` <path_key> = HMAC_SHA256(<root_secret>, <path>) ``` Setup container and object keys based on the request path. Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data.
key_id – if given this should be a dict with the items included under the id key of a dict returned by this method.
Returns: A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys.
Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone. 
See the proxy-server.conf-sample file. The authtoken middleware is shipped with keystonemiddleware; it has no dependencies other than itself, so you can install it either by copying the file directly into your Python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to set account auto creation to true: ``` [app:proxy-server] account_autocreate = true ``` And add a swift authorization filter section, such as: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this: ``` reseller_prefix = NEWAUTH ``` Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example: ``` reseller_prefix = AUTH, SERVICE ``` This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix: ``` operator_roles service_roles ``` For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes: ``` reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, some_other_role ``` X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration: ``` reseller_prefix = AUTH, SERVICE AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_service_roles = service ``` The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request's X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique. 
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration: ``` default_domain_id = default ``` The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false: ``` allow_names_in_acls = false ``` To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false: ``` allow_overrides = false ```
app – The next WSGI app in the pipeline
conf – The dict of configuration values
Authorize an anonymous request. Returns None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form: ``` /endpoints/{account}/{container}/{object} /endpoints/{account}/{container} /endpoints/{account} /endpoints/v1/{account}/{container}/{object} /endpoints/v1/{account}/{container} /endpoints/v1/{account} ``` with a JSON-encoded list of endpoints of the form: ``` http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj} http://{server}:{port}/{dev}/{part}/{acc}/{cont} http://{server}:{port}/{dev}/{part}/{acc} ``` correspondingly, e.g.: ``` http://10.1.1.1:6200/sda1/2/a/c2/o1 http://10.1.1.1:6200/sda1/2/a/c2 http://10.1.1.1:6200/sda1/2/a ``` Using the v2 API, answers requests of the form: ``` /endpoints/v2/{account}/{container}/{object} /endpoints/v2/{account}/{container} /endpoints/v2/{account} ``` with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.: ``` { "endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1", "http://10.1.1.1:6230/sda3/2/a/c3/o1", "http://10.1.1.1:6240/sda4/2/a/c3/o1"], "headers": {"X-Backend-Storage-Policy-Index": "1"}} ``` In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1. 
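For example, a locality-aware client might query the v2 API with curl as sketched below; the proxy host and object path are placeholders, and the commented response merely illustrates the shape described above:
```
# Ask the proxy where replicas of AUTH_test/c3/o1 live (no auth required).
curl -s "http://proxy.example.com/endpoints/v2/AUTH_test/c3/o1"
# => {"endpoints": ["http://10.1.1.1:6210/sda1/2/AUTH_test/c3/o1", ...],
#     "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```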
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift).
app – The next WSGI filter or app in the paste.deploy chain.
conf – The configuration dict for the middleware.
Get the ring object to use to handle a request based on its policy.
policy_idx – policy index as defined in swift.conf
Returns: appropriate ring object
Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.: ``` [pipeline:main] pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server [filter:name_check] use = egg:swift#name_check forbidden_chars = '"`<> maximum_length = 255 ``` There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if path is invalid. @author: eamonn-otoole Object versioning in Swift has 3 different modes. There are two legacy modes that have similar API with a slight difference in behavior and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature containers in a Swift cluster will be in one of 3 possible states: 1. object versioning never enabled, 2. object versioning enabled, or 3. object versioning disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored. 
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null version-id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous versions' content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with query parameter: ``` ?versions ``` To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter: ``` ?version-id=<id> ``` where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted, with no changes made to the symlink pointing to the current version. 
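To tie the above together, a typical session against this API might look like the following sketch; $STORAGE_URL and $TOKEN come from authentication, and the container/object names and version id are placeholders:
```
# Enable versioning on a container
curl -i -X POST "$STORAGE_URL/cont" -H "X-Auth-Token: $TOKEN" \
     -H "X-Versions-Enabled: true"

# Each overwrite returns a new X-Object-Version-Id header
curl -i -X PUT "$STORAGE_URL/cont/obj" -H "X-Auth-Token: $TOKEN" -d "draft 1"
curl -i -X PUT "$STORAGE_URL/cont/obj" -H "X-Auth-Token: $TOKEN" -d "draft 2"

# List all versions of the container's objects
curl -s "$STORAGE_URL/cont?versions&format=json" -H "X-Auth-Token: $TOKEN"

# Fetch, then delete, one specific version
curl -s "$STORAGE_URL/cont/obj?version-id=<id>" -H "X-Auth-Token: $TOKEN"
curl -i -X DELETE "$STORAGE_URL/cont/obj?version-id=<id>" \
     -H "X-Auth-Token: $TOKEN"
```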
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline; you must also set the option allow_object_versioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request.
req – original request.
versions_cont – container where previous versions of the object are stored.
api_version – api version.
account_name – account name.
object_name – name of object of original request
Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object.
req – original request.
versions_cont – container where previous versions of the object are stored.
account – account name.
Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container.
req – original request.
versions_cont – container where previous versions of the object are stored.
api_version – api version.
account_name – account name.
object_name – name of object of original request
Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is applied to the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object.
req – The original request
versions_cont – container holding versions of the requested obj
api_version – should be v1 unless swift bumps api version
account – account name string
container – container name string
object – object name string
is_enabled – is versioning currently enabled
version – version of the object to act on
Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method. 
The logging format implemented below is as follows: ``` client_ip remote_addr end_time.datetime method path protocol status_int referer user_agent auth_token bytes_recvd bytes_sent client_etag transaction_id headers request_time source log_info start_time end_time policy_index ``` These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request. 
req" }, { "data": "object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wirestatusint the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that make an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the systems page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys. 
Server side copy can also be used to change content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be" }, { "data": "There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example: ``` curl -i -X PUT http://<storageurl>/container1/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0' ``` Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example: ``` curl -i -X COPY http://<storageurl>/container2/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0' ``` Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example: ``` curl -i -X PUT http://<host>:<port>/v1/AUTHtest1/container/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0' ``` Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example: ``` curl -i -X COPY http://<host>:<port>/v1/AUTHtest2/container/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0' ``` The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request: ``` ?multipart-manifest=get ``` If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs for Static Large Objects further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delayauthdecision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added. 
For example: ``` [DEFAULT] ... [pipeline:main] pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server ... [filter:staticweb] use = egg:swift#staticweb ``` Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values: ``` X-Container-Meta-Web-Index <index.name> X-Container-Meta-Web-Error <error.name.suffix> ``` If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the" }, { "data": "Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note tempurlprefix values should typically end with a slash (/) when used with StaticWeb. StaticWebs redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable: ``` swift post -r '.r:*' container ``` You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive: ``` swift post -m 'web-index:index.html' container ``` You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings: ``` swift post -r '.r:*,.rlistings' container swift post -m 'web-listings: true' container ``` Now you should see object listings for paths and pseudo paths that have no index.html. 
Enable a custom listings style sheet: ``` swift post -m 'web-listings-css:listings.css' container ``` Set an error file: ``` swift post -m 'web-error:error.html' container ``` Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object: ``` swift post -m 'web-directory-type:text/directory' container ``` Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series" }, { "data": "Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object theyre pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlinks X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e. 
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlinks Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlinks Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlinks Content-Type using the ?symlink=get interface. You can change a symlinks Content-Type using a POST request. The symlinks Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink" }, { "data": "The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlinks target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function. 
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata.

Errors

PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error.

POST with the header X-Symlink-Target will produce a 400 BadRequest error.

GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error.

PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error.

POSTs will produce a 307 Temporary Redirect error.

Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used.

Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object.

If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required:

Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf.

Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server.

Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster.

Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called.

Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object. req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. req HTTP PUT object request Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly.
Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information.

All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```
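As a sketch of how such a line might be derived, the following assumes standard base64 with the trailing = padding stripped, which matches the user64 sample above (dW5kZXJfc2NvcmU is under_score, YV9i is a_b):

```
# Sketch: derive a tempauth user64_ line for names containing underscores.
import base64

def b64_name(name):
    # Standard base64, '=' padding stripped, as the user64_ syntax expects.
    return base64.b64encode(name.encode('utf-8')).rstrip(b'=').decode('ascii')

account, user = 'under_score', 'a_b'
print('user64_%s_%s = testing4' % (b64_name(account), b64_name(user)))
# -> user64_dW5kZXJfc2NvcmU_YV9i = testing4
```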
There are three special groups:

.reseller_admin can do anything to any account for this auth

.reseller_reader can GET/HEAD anything in any account for this auth

.admin can do anything within the account

If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin.

The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix>_<account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name. Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override.

The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible.

We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH_, OTHER_, a user with admin access to AUTH_account also has admin access to OTHER_account.

The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix.

If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name.

Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs.
If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users.

Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata.

Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata.

Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs).

To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl
acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = {"admin": ["alice"], "read-write": ["bob", "carol"]}
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```

Bases: object

app The next WSGI app in the pipeline

conf The dict of configuration values from the Paste config file

Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl()

Returns None if the request is authorized to continue or a standard WSGI response callable if not.

Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Return a user-readable string indicating the errors in the input ACL, or None if there are no errors.

Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user.

WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable

Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above.

Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object

Returns a WSGI filter app for use with paste.deploy.
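To make the token exchange concrete, here is a hedged sketch of a client authenticating against tempauth with the requests library; the host, port and credentials are placeholders drawn from the sample configuration above:

```
# Sketch: obtain a token from tempauth and make an authenticated request.
import requests

auth = requests.get('http://swift-proxy.example.com:8080/auth/v1.0',
                    headers={'X-Auth-User': 'test:tester',
                             'X-Auth-Key': 'testing'})
token = auth.headers['X-Auth-Token']
storage_url = auth.headers['X-Storage-Url']

# Use the token for subsequent requests against the returned storage URL.
resp = requests.get(storage_url, params={'format': 'json'},
                    headers={'X-Auth-Token': token})
print(resp.status_code, resp.json())
```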
TempURL Middleware

Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.

Beyond that, the middleware provides the ability to create URLs, which contain signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects.

Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from.

To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account.

The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported.

For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = ('%s\n%s\n%s' % (method, expires, path)).encode('ascii')
sig = hmac.new(key, hmac_body, sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = ('%s\n%s\n%s' % (method, expires, path)).encode('ascii')
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body, sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```
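The two signing recipes above can be folded into one helper. The following is a sketch under stated assumptions (the host, key and path are placeholders); it mirrors the documentation's examples rather than reproducing the middleware's own code:

```
# Sketch: assemble a signed temporary URL per the recipes above.
import base64
import hmac
from hashlib import sha256, sha512
from time import time

def make_temp_url(host, path, key, method='GET', ttl=60, digest='sha256'):
    expires = int(time() + ttl)
    hmac_body = ('%s\n%s\n%s' % (method, expires, path)).encode('ascii')
    if digest == 'sha512':
        # SHA-512 signatures must be base64-encoded and prefixed, per above.
        raw = hmac.new(key, hmac_body, sha512).digest()
        sig = 'sha512:' + base64.urlsafe_b64encode(raw).decode('ascii')
    else:
        sig = hmac.new(key, hmac_body, sha256).hexdigest()
    return ('%s%s?temp_url_sig=%s&temp_url_expires=%d'
            % (host, path, sig, expires))

print(make_temp_url('https://swift-cluster.example.com',
                    '/v1/AUTH_account/container/object', b'mykey'))
```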
You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = ('ip=%s\n%s\n%s\n%s'
             % (ip_range, method, expires, path)).encode('ascii')
sig = hmac.new(key, hmac_body, sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly to lock down the ip to a range of 1.2.3.X, so starting from the ip 1.2.3.0 to 1.2.3.255:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = ('ip=%s\n%s\n%s\n%s'
             % (ip_range, method, expires, path)).encode('ascii')
sig = hmac.new(key, hmac_body, sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```
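Conceptually, the middleware's check is the reverse of the signing step: recompute the HMAC from the request attributes and compare it to the presented temp_url_sig. The sketch below illustrates that idea for the ip-restricted case; it is an illustration, not the middleware's actual implementation:

```
# Conceptual sketch of verifying an ip-restricted temporary URL signature.
import hmac
from hashlib import sha256

def signature_matches(key, method, expires, path, ip_range, presented_sig):
    hmac_body = ('ip=%s\n%s\n%s\n%s'
                 % (ip_range, method, expires, path)).encode('ascii')
    expected = hmac.new(key, hmac_body, sha256).hexdigest()
    # hmac.compare_digest avoids leaking timing information.
    return hmac.compare_digest(expected, presented_sig)
```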
Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift.

TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present.

With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved. The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object, but you may still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameters to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings:

A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired

A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None

A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-*

A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-*

A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE

A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512

Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match.

Default headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals.

Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match.

Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals.

Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.

HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*.

Returns the WSGI filter for use with paste.deploy.
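To illustrate how these remove/allow settings interact, here is a rough sketch of prefix-based header filtering; it models the behavior described above and is not the middleware's code:

```
# Sketch: drop a header when a remove rule matches, unless an allow
# rule also matches. Rules ending in '*' are prefix matches.
def _matches(name, rules):
    name = name.lower()
    for rule in rules:
        rule = rule.lower()
        if rule.endswith('*'):
            if name.startswith(rule[:-1]):
                return True
        elif name == rule:
            return True
    return False

def filter_headers(headers, remove, allow):
    return {k: v for k, v in headers.items()
            if not _matches(k, remove) or _matches(k, allow)}

print(filter_headers({'X-Object-Meta-Color': 'red',
                      'X-Object-Meta-Public-Tag': 'ok'},
                     remove=['x-object-meta-*'],
                     allow=['x-object-meta-public-*']))
# -> {'X-Object-Meta-Public-Tag': 'ok'}
```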
Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers.

Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled:

X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container.

X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found.

Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response.

Note It is recommended to use a different archive container for each container that is being versioned.

Note Enabling versioning on an archive container is not recommended.

When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created.
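A short sketch of the naming scheme, using a hypothetical timestamp:

```
# Sketch: compute the archive-container name for a previous version,
# per the <length><object_name>/<timestamp> scheme described above.
def archive_object_name(object_name, timestamp):
    # length is the 3-character zero-padded hex length of the object name
    return '%03x%s/%s' % (len(object_name), object_name, timestamp)

print(archive_object_name('myobject', '1512508563.00000'))
# -> 008myobject/1512508563.00000
```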
A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups.

A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes.

A DELETE to a versioned object will be handled in one of two ways, as described above.

To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it.

This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature.

In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request.

Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location.

If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware.

Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss.

First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

See a listing of the older versions of the object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now delete the current version of the object. Subsequent requests will 404:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To restore a previous version, simply COPY it from the archive container:

```
curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject"
```

Note that the archive container still has all previous versions of the object, including the source for the restore:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To permanently delete a previous version, DELETE it from the archive container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp>
```

If you want to disable all functionality, set allow_versioned_writes to False in the middleware options.

Disable versioning from a container (x is any value except empty):

```
curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container
```
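The restore step above can also be scripted. The following sketch uses the requests library; the token and storage URL are placeholders, and it relies on the timestamped archive names making lexicographic order chronological, so the newest version lists last:

```
# Sketch: restore the most recent archived version of an object with a COPY.
import requests

token = 'AUTH_tk...'  # hypothetical auth token
storage = 'https://swift-cluster.example.com/v1/AUTH_account'
headers = {'X-Auth-Token': token}

# List archived versions; the length-prefixed name groups them together.
prefix = '%03x%s/' % (len('myobject'), 'myobject')
versions = requests.get(storage + '/versions',
                        params={'prefix': prefix, 'format': 'json'},
                        headers=headers).json()

if versions:
    newest = versions[-1]['name']  # listing is sorted, newest last
    # COPY the archived version back over the original location.
    requests.request('COPY', storage + '/versions/' + newest,
                     headers=dict(headers, Destination='container/myobject'))
```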
Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name.

Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Profiling middleware for Swift Servers.

The current implementation is based on an eventlet-aware profiler. (For the future, more profilers could be added in to collect more data for analysis.) Profiling all incoming requests and accumulating cpu timing statistics information for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URL as below.

Index page for browse profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json
```

NFL is defined by concatenation of file name, function name and the first line number, e.g.: account.py:50(GETorHEAD) or with full path: /opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:

```
http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory.

The profiling data is provided in four formats: binary (by default), json, csv and odf spreadsheet, the last of which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability which is enabled by using the matplotlib toolkit; it is also required to be installed if you want to use this feature.
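For example, a client might pull the current profile in JSON form like this (assuming a proxy server on localhost:8080 with the profiling middleware enabled in its pipeline):

```
# Sketch: fetch the current profile stats from the mini web UI in JSON form.
import requests

resp = requests.get('http://localhost:8080/__profile__/current',
                    params={'format': 'json'})
resp.raise_for_status()
print(resp.json())
```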
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.memcache.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by get_account_info None (no ACL system metadata is set), or a dict of the form: {'admin': [...], 'read-write': [...], 'read-only': [...]} ValueError on a syntactically invalid header

Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is:

```
[item[,item...]]
```

Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is:

```
.r:[-]value
```

The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com.

Example valid ACLs:

```
.r:*
.r:*,.r:-.thief.com
.r:*,.r:.example.com,.r:-thief.example.com
.r:*,.r:-.thief.com,bobsaccount,suesaccount:sue
bobsaccount,suesaccount:sue
```

Example invalid ACLs:

```
.r:
.r:-
```

By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations aren't allowed in headers whose names include the word write.

ACLs that are messy will be cleaned up. Examples:

| Original | Cleaned |
|:-|:-|
| bob, sue | bob,sue |
| bob , sue | bob,sue |
| bob,,,sue | bob,sue |
| .referrer : * | .r:* |
| .ref:*.example.com | .r:.example.com |
| .r:*, .rlistings | .r:*,.rlistings |

name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why.

Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see format_acl_v1() or format_acl_v2())

Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers don't support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL we're preparing, for clean_acl; if None, returned ACL won't be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc.

Returns a version-2 Swift ACL JSON string. Header-Name: {"arbitrary":"json","encoded":"string"} JSON will be forced ASCII (containing six-char \uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys don't cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you want). acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL
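A round-trip sketch using the functions documented here; this assumes parse_acl accepts the same version= keyword style as format_acl, per the compatibility-wrapper description below:

```
# Sketch: build a v2 account ACL header and parse it back.
from swift.common.middleware.acl import format_acl, parse_acl

acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
header_value = format_acl(version=2, acl_dict=acl_data)
print(header_value)  # compact, ASCII-only JSON

# Assumed: parse_acl delegates to the v2 parser when given data= as a keyword.
parsed = parse_acl(version=2, data=header_value)
print(parsed == acl_data)  # True
```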
Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parse_acl_v1() or parse_acl_v2()) the return value of parse_acl_v1() or parse_acl_v2()

Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access.

Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {'groups': [...], 'referrers': [...]} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string

Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not.

Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse.

Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, I'm looking at you :/), and instead make all calls through httplib.

Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in __init__. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader('Accept', 'text/html') Send a request to the server. method specifies an HTTP request method, e.g. 'GET'. url specifies the object being requested, e.g. '/index.html'. skip_host if True does not add automatically a Host: header skip_accept_encoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse

Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed.
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be" }, { "data": "req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us for accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFTCONFFILE and reset module level global constraint attrs, populating OVERRIDECONSTRAINTS AND EFFECTIVECONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the files mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the users container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digestalgorithm, hexencoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. 
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated itll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULTALLOWEDDIGESTS. logger optional logger; if provided, use it issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared" }, { "data": "digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/on127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g, {states:updating} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. 
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
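As a quick illustration of the CompressingFileReader described above, the following minimal sketch compresses an in-memory payload without needing a cluster; the gzip framing of the output is an assumption based on the reader's use with .gz object names:

```
from io import BytesIO
import gzip

from swift.common.internal_client import CompressingFileReader

payload = b'some object data that compresses well ' * 100

# wrap a file-like object; iterating yields compressed chunks
reader = CompressingFileReader(BytesIO(payload), compresslevel=9,
                               chunk_size=4096)
compressed = b''.join(reader)

# assumption: the stream is gzip-framed, so it round-trips
assert gzip.decompress(compressed) == payload
```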
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
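A minimal sketch of using the InternalClient described above; the conf path and the AUTH_test account are assumptions about a SAIO-style deployment:

```
from swift.common.internal_client import InternalClient, UnexpectedResponse

# conf path and request_tries=3 are illustrative choices
client = InternalClient('/etc/swift/proxy-server.conf', 'example-agent', 3)
try:
    # list containers in an (assumed) account
    for container in client.iter_containers('AUTH_test'):
        print(container['name'], container['count'], container['bytes'])
    # fetch user metadata for an (assumed) object, prefix stripped
    meta = client.get_object_metadata('AUTH_test', 'c', 'o',
                                      metadata_prefix='x-object-meta-')
except UnexpectedResponse as err:
    print('unacceptable status:', err.resp.status)
```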
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group : param pid: process id : param sig: signal to send Send signal to process and check process name : param pid: process id : param sig: signal to send : param name: name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the servers binaries are installed or" }, { "data": "server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that arent responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesnt use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but its hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldnt work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract to the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache. 
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include tempurl_signature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters.

Registers information about the swift cluster to be retrieved with calls to get_swift_info. Note: do not use '.' in name or in any of the keys in kwargs; '.' is used in the disallowed_sections to remove unwanted keys from /info.

name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has '.' in it

Miscellaneous utility functions for use in generating responses.

Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports.

Bases: object

Iterable that returns the object contents for a large object.

req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. max_get_time maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) response_body_length optional response body length for the response being sent to the client.

swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .app_iter_range(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob.

This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes.

Called when the client disconnects. Ensure that the connection to the backend server is closed.

Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segment's etag doesn't match the manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty.
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object" }, { "data": "HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasnt stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but its also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Requests Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers[Content-Range] sets and gets the value of headers.environ[HTTPCONTENTRANGE] Bases: object Wraps a Requests If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Requests Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges. 
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will be all considered as syntactically invalid, thus, a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid, ValueError will be thrown so that the header will be ignored. value not starts with bytes= range value start is greater than the end, eg. bytes=5-3 range does not have start or end, eg. bytes=- range does not have hyphen, eg. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method init made sure that any range in ranges list is syntactically valid. So if length is None or size of the ranges is zero, then the Range header should be ignored which will eventually make the response to be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, 416 will be" }, { "data": "if a returned list has at least one element, the list indicates that there is at least one range valid and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as a Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with an property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this requests environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the requests X-Timestamp header, as a Timestamp Calls the application with this requests environment. Returns a Response object that wraps up the applications result. 
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers dont contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the pathinfo, and appends it to the scriptname. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Requests" }, { "data": "Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted restwithlast If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTPXTIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTPUSERAGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environments HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leaverelativelocation'] = True in the WSGI environment. Attempt to construct an absolute location. 
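A short sketch of the request/response round trip described above; tiny_app is an illustrative stand-in for a real WSGI pipeline, not part of Swift:

```
from swift.common.swob import Request, Response

def tiny_app(env, start_response):
    # a swob Response is itself a WSGI callable
    return Response(body='you asked for %s' % env['PATH_INFO'])(
        env, start_response)

req = Request.blank('/v1/AUTH_test/c/o', method='GET')
resp = req.get_response(tiny_app)
print(resp.status_int, resp.body)

# split_path via the request; returns native strings
version, account, container, obj = req.split_path(4, 4, True)
```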
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of a If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need support conditional requests. Its most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or appiter to reset the contentlength properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators close methods, if" }, { "data": "Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of getnext_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gced Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads. 
This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve.

The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining.

iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 means that cooperative sleep will be disabled.

Bases: object

A container that contains everything. If e is an instance of Everything, then x in e is true for all x.

Bases: object

Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller.

Spawn a job in a green thread on the pile.

Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time

Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None

Bases: Timeout

Bases: object

Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error.

Bases: object

File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes.

Pass read request to the underlying file-like object and add bytes read to total.

Pass readline request to the underlying file-like object and add bytes read to total.

Bases: ValueError

Bases: object

Decorator for size/time bound memoization that evicts the least recently used members.

Bases: object

A Namespace encapsulates parameters that define a range of the object namespace.

name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <account_name>/<container_name>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace.

Bases: NamespaceOuterBound

Bases: NamespaceOuterBound

Returns True if this namespace includes the entire namespace, False otherwise.

Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise.

Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace

Returns True if this namespace overlaps with the other namespace. other an instance of Namespace

Bases: object

A custom singleton type to be subclassed for the outer bounds of Namespaces.
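The bound semantics described above (lower excluded, upper included) can be sketched as follows; the account and container names are illustrative:

```
from swift.common.utils import Namespace

a = Namespace('AUTH_test/c1', lower='b', upper='m')
b = Namespace('AUTH_test/c2', lower='d', upper='k')

print(a.includes(b))         # True: b's range lies wholly within a's
print(a.overlaps(Namespace('AUTH_test/c3', lower='k', upper='z')))  # True
print(a.entire_namespace())  # False: a is bounded on both sides
```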
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elementspersecond the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root containers own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, rootcontainer, parentcontainer and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/ <rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <accountname>/<containername>. timestamp a timestamp that represents the time at which the shard ranges lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range" }, { "data": "upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard ranges objectcount and bytesused were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp. 
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard ranges ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard ranges parent shard range, the parents parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard ranges root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard ranges root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class init() method and requires all of the init() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if objectcount or bytesused cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard" }, { "data": "This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. 
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name> Given a value that may be either the name or the number of a state, return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range, i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
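To make the preceding descriptions concrete, here is a hedged sketch of building ShardRange items and exercising the ShardRangeList filtering described above; the remaining ShardRangeList methods continue below. It assumes a Swift checkout is importable, and all account, container and bound values are invented.

```python
# Hedged sketch of ShardRange/ShardRangeList usage; the classes are assumed
# to behave as documented above, and all values are invented.
import time
from swift.common.utils import ShardRange, ShardRangeList, Timestamp

now = Timestamp(time.time())
ranges = ShardRangeList([
    # names take the documented <account_name>/<container_name> form
    ShardRange('.shards_AUTH_test/c-0', now, lower='', upper='m',
               object_count=10, bytes_used=1024),
    ShardRange('.shards_AUTH_test/c-1', now, lower='m', upper='',
               object_count=5, bytes_used=512),
])

print(ranges.object_count)   # 15: totals across all items in the list
print(ranges.bytes_used)     # 1536

# Keep only shard ranges overlapping the (marker, end_marker] interval.
subset = ranges.filter(marker='a', end_marker='b')

# Lower bound of the first range holding more than 6 objects.
bound = ranges.find_lower(lambda sr: sr.object_count > 6)
```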
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ```
>>> si = Spliterator(["abcde", "fg", "hijkl"])
>>> ''.join(si.take(4))
"abcd"
>>> ''.join(si.take(3))
"efg"
>>> ''.join(si.take(1))
"h"
>>> ''.join(si.take(3))
"ijk"
>>> ''.join(si.take(3))
"l"  # shorter than requested; this can happen with the last iterator
``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts.Timeout, it reduces the amount of context switching in eventlet by avoiding scheduling actions (throwing an Exception) and then unscheduling them if the timeouts are cancelled. For example: at T+0 a timeout(10) is requested, so the watchdog greenlet sleeps 10 seconds; at T+1 a timeout(15) is requested, which will expire after the current one, so there is no need to wake up the watchdog greenlet; at T+2 a timeout(5) is requested, which will expire before the first one, so the watchdog greenlet is woken up to calculate a new sleep period; at T+7 the third timeout expires, the exception is raised, and the watchdog greenlet then sleeps again to wake up for the 1st timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
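A minimal sketch of driving the watchdog described above; its scheduling methods (start/stop) are documented next. The run()/kill() method names, the WatchdogTimeout argument order, and the eventlet-patched environment are assumptions based on how such a class is typically used; do_slow_io() and handle_timeout() are hypothetical placeholders.

```python
# Hedged sketch of Watchdog/WatchdogTimeout usage; run()/kill() are assumed
# to be the start/stop-greenthread methods mentioned above, and the workload
# functions are hypothetical.
import eventlet
from swift.common.utils import Watchdog, WatchdogTimeout

watchdog = Watchdog()
watchdog.run()  # start the single greenthread that services all timeouts

try:
    # Raise eventlet.Timeout if the block takes longer than 5 seconds.
    with WatchdogTimeout(watchdog, 5.0, eventlet.Timeout):
        do_slow_io()          # hypothetical slow operation
except eventlet.Timeout:
    handle_timeout()          # hypothetical error handling

# The lower-level interface schedules and cancels a timeout explicitly.
key = watchdog.start(5.0, eventlet.Timeout)
try:
    do_slow_io()
finally:
    watchdog.stop(key)        # cancel so the exception is never thrown

watchdog.kill()  # stop the watchdog greenthread when done
```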
Schedule a timeout action. timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout. key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory. (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated, e.g. they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing, e.g. saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc., i.e. it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize no of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware). env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string.
If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range. Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header. Finally, there is one optional key that is used in the multipart/byteranges case: entire_length: length of the requested resource (not necessarily equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. boundary, not --boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content_type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it.
It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form: ```
<hash>[_<epoch>].db
``` where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before call get_hub() in wsgi.py if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use eventlet.patcher.original function to get python select module to test if poll() is available on platform. Return partition number for given hex hash and partition power. hex_hash A hash string part_power partition power the partition number devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but a X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ```
'lock', None => 'lock'
'lock', 0 => 'lock'
'lock', 1 => 'lock-1'
'object', 2 => 'object-2'
``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file.
This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in Write mode. Python doesn't allow directories to be opened in Write Mode. So we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string that distinguishes different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set, get, etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Ensures the path is a directory or makes it if not.
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name=somefile; filename=test.html Return data like (form-data, {name: somefile, filename: test.html}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ```
'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')])
'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')])
``` content_type content_type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ```
>>> parse_db_filename("ab2134.db")
('ab2134', None, '.db')
>>> parse_db_filename("ab2134_1234567890.12345.db")
('ab2134', '1234567890.12345', '.db')
``` filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. iterable an iterable object Wrapper for os.rmdir; ENOENT and ENOTEMPTY are ignored. path first and only argument passed to os.rmdir Quiet wrapper for os.unlink; OSErrors are suppressed. path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
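A minimal sketch of the rate-limiting call pattern described above. The function name ratelimit_sleep is an assumption (this is the name the helper has historically had in swift.common.utils; newer releases may expose an equivalent class instead), and data_chunks/process() are hypothetical.

```python
# Hedged sketch of the documented rate-limiting call pattern; the function
# name ratelimit_sleep is an assumption, and the workload is hypothetical.
from swift.common.utils import ratelimit_sleep

running_time = 0  # "Best to start at zero"
for chunk in data_chunks:          # hypothetical iterable of byte strings
    # Keep throughput at or below ~1024 bytes/sec, crediting each chunk
    # by its size; the return value feeds the next call.
    running_time = ratelimit_sleep(running_time, max_rate=1024,
                                   incr_by=len(chunk))
    process(chunk)                 # hypothetical consumer
```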
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different types of servers: If option replication_server is None then this decorator doesn't matter. If option replication_server is True then ONLY methods decorated with this decorator will be started. If option replication_server is False then methods decorated with this decorator will NOT be started. func function to mark accessible for replication Takes a list of iterators, yielding an element from each in a round-robin fashion until all of them are stopped. its list of iterators Transform an ip string to an rsync-compatible form. Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate a device's variables inside an rsync module template. template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dir_ext if present, directories that end with dir_ext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that item's value will determine if the replication ip address and port are returned. If neither use_replication nor node_dict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error.
value some base64-encoded data allow_line_breaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ```
READY=1
RELOADING=1
STOPPING=1
STATUS=<some string>
``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in swift's wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, synced to disk, and then moved to its final location. obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters.
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. servers_per_port (int) The number of workers to run per port. Yields all known listen sockets. Log a server's exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. Yields all known listen sockets. Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited.
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env[PATH_INFO]. url += quote(environ[PATH_INFO]) query_string The new query_string or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += ? + env[QUERY_STRING] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. For example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_env make_subrequest calls this make_env to help build the swob.Request.
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in a pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a servers_per_port setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allow_modify_pipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whose first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclass (e.g. StoragePolicy, ECStoragePolicy, etc) or use reload_storage_policies() to load POLICIES from swift.conf. The object_ring property is lazily loaded once the service's swift_dir is known via get_object_ring(), but it may be overridden via the object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This shorthand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header.
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib. node_index integer of node index integer of actual fragment index; if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies are defined, Swift will create one The policy index must be a non-negative integer If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default Policy indexes must be unique Policy names are required Policy names are case insensitive Policy names must contain only letters, digits or a dash Policy names must be unique The policy name Policy-0 can only be used for the policy with index 0 If any policies are defined, exactly one policy must be declared default Deprecated policies cannot be declared the default Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int); if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
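A brief illustrative sketch of the POLICIES interface and the helper functions described above. It assumes a valid swift.conf is installed with a policy named gold; the module path and behavior follow the documentation above.

```python
# Illustrative sketch of the storage policy helpers documented above;
# assumes a valid /etc/swift/swift.conf defining a policy named 'gold'.
from swift.common.storage_policy import (
    POLICIES, get_policy_string, split_policy_string)

default_policy = POLICIES.default           # the policy declared default
gold = POLICIES.get_by_name('gold')         # None if no such policy exists
legacy = POLICIES.get_by_index(0)           # an index of None is treated as 0

# Encode a policy index into an on-disk name and decode it back.
datadir = get_policy_string('objects', gold)    # e.g. 'objects-1'
base, policy = split_policy_string(datadir)     # ('objects', <gold policy>)
```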
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.versioned_writes.object_versioning.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example showing how to create an EC2 credential: ``` +++ | Field | Value | +++ | access | c2e30f2cd5204b69a39b3f1130ca8f61 | | links | {u'self': u'http://controller:5000/v3/......'} | | project_id | 407731a6c2d0425c86d1e7f12a900488 | | secret | baab242d192a4cd6b68696863e07ed59 | | trust_id | None | | user_id | 00f0ee06afe74f81b410f3fe03d34fbc | +++ ``` An example client using keystone auth with the python boto library will be: ``` from boto.s3.connection import S3Connection connection = S3Connection( aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61', aws_secret_access_key='baab242d192a4cd6b68696863e07ed59', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is: ``` [pipeline:main] pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server ``` When using keystone, the config will be: ``` [pipeline:main] pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server ``` Finally, add the s3api middleware section: ``` [filter:s3api] use = egg:swift#s3api ``` Note keystonemiddleware.authtoken can be located before/after s3api, but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will send the token to keystone (i.e. authenticate twice). Also, in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 may still remain. Please review the descriptions in the example proxy-server.conf and understand what each config option does before enabling it. Compatibility will continue to be improved upstream; you can keep an eye on it via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat in detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache secret from keystone to validate signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ``` [filter:s3token] use = egg:swift3#s3token auth_uri = https://keystonehost:35357 auth_version = 3 ``` you should now use ``` [filter:s3token] use = egg:swift#s3token auth_uri = https://keystonehost:35357/v3 ``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy.
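Circling back to client setup: the snippets above use the legacy boto package. An equivalent client with the newer boto3 library might look like the following sketch, reusing the same SAIO endpoint and tempauth-style credentials assumed above:

```python
import boto3

# Same SAIO endpoint and tempauth-style credentials as the boto examples.
s3 = boto3.client(
    's3',
    aws_access_key_id='test:tester',
    aws_secret_access_key='testing',
    endpoint_url='http://127.0.0.1:8080',
)

# List buckets (i.e. Swift containers) visible to this account.
print(s3.list_buckets()['Buckets'])
```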
Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth (no support for third party authentication middleware). Wrapper method of get_response to add s3 acl information from response sysmeta headers. Wrap up the get_response call to hook in the acl handling method. Create a Swift request based on this request's environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesn't match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property check_copy_source checks the copy source existence and, if copying an object to itself, checks for illegal request parameters the source HEAD response get_container_info will return a result dict of get_container_info from the backend Swift. a dictionary of container info from swift.controllers.base.get_container_info NoSuchBucket when the container doesn't exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks are needed at the time of getting the swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request needs to just call get_response to get a pure swift response. Get and set the object acl property S3Timestamp from Date header. If the X-Amz-Date header is specified, it takes precedence over the Date header. :return: S3Timestamp instance Create a Swift request based on this request's environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured max_upload_part_num and, if given, parts_count. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is that v4 has to see X-Amz-Date even though it's the query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to Swift's normal HeaderKeyDict class, but its key names are normalized as S3 clients expect.
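Returning to the part-number rules described earlier in this section, a standalone sketch of that validation may help make them concrete (the ValueError/IndexError here merely stand in for the middleware's InvalidPartArgument and InvalidPartNumber errors):

```python
def validate_part_number(part_number, max_part_num, parts_count=None):
    """Validate a partNumber query param per the rules described above."""
    try:
        num = int(part_number)
    except (TypeError, ValueError):
        raise ValueError('invalid partNumber: %r' % part_number)
    # The effective maximum is the larger of the configured limit and,
    # if known, the number of parts in the existing object.
    upper = max(max_part_num, parts_count or 0)
    if num < 1 or num > upper:
        raise ValueError('partNumber must be between 1 and %d' % upper)
    if parts_count is not None and num > parts_count:
        raise IndexError('object has only %d parts' % parts_count)
    return num

assert validate_part_number('3', max_part_num=1000, parts_count=5) == 3
```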
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why do we need this? The original lxml.etree.Element supports only unicode for the text. That hurts maintainability because we would have to call a lot of encode/decode methods to apply the account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats for the argument, for S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs.
The current s3api (real S3) ACL model is as follows: ``` AccessControlPolicy: Owner: AccessControlList: Grant[n]: (Grantee, Permission) ``` Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant allowing FULL_CONTROL to the owner. Each Grant holds a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you want more detail on the S3 ACL model, please see the official documentation here, http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. (See http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves completely the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACLs. Bases: object Grant Class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantee. Methods: init: create a Grantee instance elem: create an ElementTree from itself Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance. from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. They are really useful for customizing ACL-checking algorithms for each controller.
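Before moving on to the handlers, here is a small sketch that renders the default AccessControlPolicy described above (one FULL_CONTROL grant for the owner) using only the standard library; the element names follow the S3 ACL schema, while the plain-string namespace attributes are a simplification for illustration:

```python
import xml.etree.ElementTree as ET

XSI = 'http://www.w3.org/2001/XMLSchema-instance'

def default_acl_xml(owner_id, owner_name):
    """Build the default ACL document: one FULL_CONTROL grant for the owner."""
    policy = ET.Element('AccessControlPolicy')
    owner = ET.SubElement(policy, 'Owner')
    ET.SubElement(owner, 'ID').text = owner_id
    ET.SubElement(owner, 'DisplayName').text = owner_name
    acl = ET.SubElement(policy, 'AccessControlList')
    grant = ET.SubElement(acl, 'Grant')
    # Plain-string namespace attributes keep this stdlib-only sketch simple.
    grantee = ET.SubElement(grant, 'Grantee',
                            {'xmlns:xsi': XSI, 'xsi:type': 'CanonicalUser'})
    ET.SubElement(grantee, 'ID').text = owner_id
    ET.SubElement(grantee, 'DisplayName').text = owner_name
    ET.SubElement(grant, 'Permission').text = 'FULL_CONTROL'
    return ET.tostring(policy)

print(default_acl_xml('test:tester', 'test:tester').decode())
```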
BaseAclHandler wraps basic ACL handling (i.e. it will check the ACL from ACL_MAP by using HEAD). Make a handler with the name of the controller (e.g. BucketAclHandler is for BucketController). It consists of method(s) for the actual S3 method on controllers as follows. Example: ``` class BucketAclHandler(BaseAclHandler): def PUT: << put acl handling algorithms here for PUT bucket >> ``` Note If the method DOESN'T need to call get_response again outside of the acl checking, the method has to return the response it needs at the end of the method. Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step of the request to the backend Swift for an incoming request. The BASE container name is always w/o MULTIUPLOAD_SUFFIX. Any check timing is OK, but we should check it as soon as possible. | Controller | Verb | CheckResource | Permission | |:-|:-|:-|:-| | Part | PUT | Container | WRITE | | Uploads | GET | Container | READ | | Uploads | POST | Container | WRITE | | Upload | GET | Container | READ | | Upload | DELETE | Container | WRITE | | Upload | POST | Container | WRITE | Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acls such as x-container-read to s3acl Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container exists. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket requests. Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift containers and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not yet either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[upload_id]/[part_number]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[upload_id1] # upload id object for upload_id1 [bucket]+segments/[upload_id1]/1 # part object for upload_id1 [bucket]+segments/[upload_id1]/2 # part object for upload_id1 [bucket]+segments/[upload_id1]/3 # part object for upload_id1 [bucket]+segments/[upload_id2] # upload id object for upload_id2 [bucket]+segments/[upload_id2]/1 # part object for upload_id2 [bucket]+segments/[upload_id2]/2 # part object for upload_id2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging.
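Before leaving the s3api controllers, a quick illustration of the segment layout described above for multipart uploads; these helpers merely reproduce the naming convention shown in the example listing and are a sketch, not the middleware's actual code:

```python
MULTIUPLOAD_SUFFIX = '+segments'  # hidden container suffix, as shown above

def segments_container(bucket):
    """The hidden container that backs multipart uploads for a bucket."""
    return bucket + MULTIUPLOAD_SUFFIX

def upload_marker(bucket, upload_id):
    """Zero-byte object marking an in-progress upload."""
    return '%s/%s' % (segments_container(bucket), upload_id)

def part_object(bucket, upload_id, part_number):
    """Object holding one uploaded part for an upload id."""
    return '%s/%s/%d' % (segments_container(bucket), upload_id, part_number)

assert part_object('bucket', 'upload_id1', 3) == 'bucket+segments/upload_id1/3'
```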
Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift account. Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTH_Account/$UPLOAD_PATH?extract-archive=tar.gz ``` UPLOAD_PATH is where the files will be expanded to. UPLOAD_PATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTH_Account/$UPLOAD_PATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto-created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth tokens, Content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H "Content-Type: application/x-tar" -H "X-Auth-Token: xxx" -H "X-Detect-Content-Type: true" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v "application/python-setup" setup.py setfattr -n user.meta.lunch -v "burger and fries" setup.py setfattr -n user.meta.dinner -v "baked ziti" setup.py setfattr -n user.stuff -v "whee" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. (2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.user_attribute as pax header SCHILY.xattr.user.user_attribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.user_attribute.
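A sketch of how those pax xattr headers can be mapped to Swift headers with the standard tarfile module follows; the prefix handling mirrors the GNU/BSD variants in footnote (2), but this is an illustration under those assumptions, not the middleware's code:

```python
import tarfile

XATTR_PREFIXES = ('SCHILY.xattr.', 'LIBARCHIVE.xattr.')  # GNU and BSD tar

def swift_headers_from_member(member):
    """Map user.meta.* / user.mime_type xattrs to Swift object headers."""
    headers = {}
    for key, value in member.pax_headers.items():
        for prefix in XATTR_PREFIXES:
            if key.startswith(prefix):
                attr = key[len(prefix):]
                if attr == 'user.mime_type':
                    headers['Content-Type'] = value
                elif attr.startswith('user.meta.'):
                    meta = attr[len('user.meta.'):]
                    headers['X-Object-Meta-' + meta.title()] = value
                # all other xattrs are ignored, as described above
    return headers

with tarfile.open('backup.tar') as tf:
    for member in tf:
        if member.isreg():  # only regular files are uploaded
            print(member.name, swift_headers_from_member(member))
```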
The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be taken to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the request. The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {"Response Status": "201 Created", "Response Body": "", "Errors": [], "Number Files Created": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequest's proxy log will have swift.source set to EA and the log's content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /container_name/obj_name ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive in that every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {"Number Not Found": 0, "Response Status": "200 OK", "Response Body": "", "Errors": [], "Number Deleted": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the request's Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml.
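Putting the request format above together, a client-side sketch using the requests library might look like this; the storage URL and token are placeholders:

```python
from urllib.parse import quote

import requests

storage_url = 'http://127.0.0.1:8080/v1/AUTH_test'  # placeholder
token = '<auth token>'  # placeholder

paths = ['/container/obj 1', '/container/obj2', '/empty-container']
body = '\n'.join(quote(p) for p in paths)  # newline-separated, URL-encoded

resp = requests.post(
    storage_url + '?bulk-delete',
    headers={'X-Auth-Token': token,
             'Content-Type': 'text/plain',
             'Accept': 'application/json'},
    data=body)
# The status code is always 200; success must be read from the body.
print(resp.json())
```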
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have swift.source set to BD and a content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised. inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and it's unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. |
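For instance, an administrator could set both metadata entries from the table above with python-swiftclient; a minimal sketch with placeholder URL and token:

```python
from swiftclient import client as swift_client

url = 'http://127.0.0.1:8080/v1/AUTH_test'  # placeholder storage URL
token = '<auth token>'  # placeholder

# Cap the container at 10 MB and 1000 objects, per the metadata above.
swift_client.put_container(
    url, token, 'quota-demo',
    headers={'X-Container-Meta-Quota-Bytes': '10485760',
             'X-Container-Meta-Quota-Count': '1000'})
```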
The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catch_errors cache tempauth container_quotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipses (...) indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain cross_domain_policy = <allow-access-from domain="*.example.com" /> <allow-access-from domain="www.example.com" secure="false" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value. The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified: ``` cross_domain_policy = <allow-access-from domain="*" secure="false" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information. Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e. expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ```
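The swift tempurl helper above just computes a TempURL-style HMAC-SHA1 over the method, expiry and the /info path; assuming that scheme, a hand-rolled sketch of the same signature would be:

```python
import hmac
from hashlib import sha1
from time import time

key = b'secret'          # the configured admin key (assumption)
method, path = 'GET', '/info'
expires = int(time() + 3600)

# TempURL-style HMAC body: method, expiry and path joined by newlines.
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode(), sha1).hexdigest()
# Request: GET /info?swiftinfo_sig=<sig>&swiftinfo_expires=<expires>
print('/info?swiftinfo_sig=%s&swiftinfo_expires=%d' % (sig, expires))
```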
Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a default_reseller_prefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The default_reseller_prefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the default_reseller_prefix config option is used. When no default_reseller_prefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with default_reseller_prefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
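A toy version of the translation described above may help clarify it; this sketch ignores reseller-prefix case fixing and assumes well-formed hosts, so treat it as an illustration of the idea rather than the middleware's code:

```python
def remap(host, path, storage_domain='example.com', path_root='v1'):
    """Translate '<container>.<account>.<storage_domain>' into a path."""
    if not host.endswith('.' + storage_domain):
        return path  # not our domain; leave the request alone
    sub = host[:-len('.' + storage_domain)]
    parts = sub.split('.')
    account = parts[-1].replace('-', '_')       # AUTH-acct -> AUTH_acct
    prefix = [path_root, account] + parts[:-1]  # optional container part
    if path.startswith('/' + path_root + '/'):
        path = path[len('/' + path_root):]      # don't double the path root
    return '/' + '/'.join(prefix) + path

assert (remap('container.AUTH-account.example.com', '/object')
        == '/v1/AUTH_account/container/object')
assert (remap('AUTH-account.example.com', '/v1/container/object')
        == '/v1/AUTH_account/container/object')
```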
DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further details. Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, crypto_meta) where crypto_meta is a dict of the form returned by get_crypto_meta() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of the form returned by get_crypto_meta() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. header_name name of header that may have crypto_meta check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses a json body listing and decrypts encrypted entries. Updates the Content-Length header with the new body length and returns a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values.
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache: ``` [pipeline:main] pipeline = ... cache etag-quoter ... [filter:etag-quoter] use = egg:swift#etag_quoter ``` Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s.
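Flipping those flags can be done with any Swift client; for instance with python-swiftclient (the URL and token are placeholders), a minimal sketch:

```python
from swiftclient import client as swift_client

url = 'http://127.0.0.1:8080/v1/AUTH_test'  # placeholder
token = '<auth token>'  # placeholder

# Quote Etags for every container in the account ...
swift_client.post_account(
    url, token, headers={'X-Account-Rfc-Compliant-Etags': 'true'})
# ... but opt a single legacy container back out.
swift_client.post_container(
    url, token, 'legacy-md5',
    headers={'X-Container-Rfc-Compliant-Etags': 'false'})
```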
FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is: ``` <form action="<swift-url>" method="POST" enctype="multipart/form-data"> <input type="hidden" name="redirect" value="<redirect-url>" /> <input type="hidden" name="max_file_size" value="<bytes>" /> <input type="hidden" name="max_file_count" value="<count>" /> <input type="hidden" name="expires" value="<unix-timestamp>" /> <input type="hidden" name="signature" value="<hmac>" /> <input type="file" name="file1" /><br /> <input type="submit" /> </form> ``` Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input: ``` <input type="hidden" name="x_delete_at" value="<unix-timestamp>" /> <input type="hidden" name="x_delete_after" value="<seconds>" /> ``` If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input: ``` <input type="hidden" name="content-type" value="text/html" /> <input type="hidden" name="content-encoding" value="gzip" /> ``` The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like: ``` https://swift-cluster.example.com/v1/AUTH_account/container/ ``` Optionally, you can include an object prefix to better separate different users' uploads, such as: ``` https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix ``` Note the form method must be POST and the enctype must be set as multipart/form-data. The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature: ``` import hmac from hashlib import sha512 from time import time path = '/v1/account/container/object_prefix' redirect = 'https://srv.com/some-page' # set to '' if redirect not in form max_file_size = 104857600 max_file_count = 10 expires = int(time() + 600) key = 'mykey' hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size, max_file_count, expires) signature = hmac.new(key.encode(), hmac_body.encode(), sha512).hexdigest() ``` The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy.
The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly. If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the get_root_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being encrypted. secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function: ``` <path_key> = HMAC_SHA256(<root_secret>, <path>) ``` Setup container and object keys based on the request path. Keys are derived from the request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys.
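That derivation is plain HMAC-SHA256 and is easy to sanity-check with the standard library; the root secret below is an illustrative placeholder only, since a real root secret must be a high-entropy value of at least 256 bits:

```python
import hashlib
import hmac

root_secret = b'\x00' * 32  # placeholder only -- never use a constant secret

def derive_key(path):
    """<path_key> = HMAC_SHA256(<root_secret>, <path>)"""
    return hmac.new(root_secret, path.encode(), hashlib.sha256).digest()

container_key = derive_key('/a/c')
object_key = derive_key('/a/c/o')
assert container_key != object_key  # every path gets its own key
```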
Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone. proxy-server.conf-sample The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to set account auto-creation to true: ``` [app:proxy-server] account_autocreate = true ``` And add a swift authorization filter section, such as: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this: ``` reseller_prefix = NEWAUTH ``` Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example: ``` reseller_prefix = AUTH, SERVICE ``` This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix: ``` operator_roles service_roles ``` For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes: ``` reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, some_other_role ``` X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration: ``` reseller_prefix = AUTH, SERVICE AUTH_operator_roles = admin, swiftoperator SERVICE_operator_roles = admin, swiftoperator SERVICE_service_roles = service ``` The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration: ``` default_domain_id = default ``` The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false: ``` allow_names_in_acls = false ``` To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with the reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false: ``` allow_overrides = false ``` app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form: ``` /endpoints/{account}/{container}/{object} /endpoints/{account}/{container} /endpoints/{account} /endpoints/v1/{account}/{container}/{object} /endpoints/v1/{account}/{container} /endpoints/v1/{account} ``` with a JSON-encoded list of endpoints of the form: ``` http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj} http://{server}:{port}/{dev}/{part}/{acc}/{cont} http://{server}:{port}/{dev}/{part}/{acc} ``` correspondingly, e.g.: ``` http://10.1.1.1:6200/sda1/2/a/c2/o1 http://10.1.1.1:6200/sda1/2/a/c2 http://10.1.1.1:6200/sda1/2/a ``` Using the v2 API, answers requests of the form: ``` /endpoints/v2/{account}/{container}/{object} /endpoints/v2/{account}/{container} /endpoints/v2/{account} ``` with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.: ``` { "endpoints": {"http://10.1.1.1:6210/sda1/2/a/c3/o1", "http://10.1.1.1:6230/sda3/2/a/c3/o1", "http://10.1.1.1:6240/sda4/2/a/c3/o1"}, "headers": {"X-Backend-Storage-Policy-Index": "1"}} ``` In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
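A consumer of this API could fetch locality data with a few lines of Python; the proxy address and names here are placeholders, and (as noted just below) such a client should only ever run inside the cluster:

```python
import requests

proxy = 'http://127.0.0.1:8080'  # placeholder proxy address

resp = requests.get(proxy + '/endpoints/v2/AUTH_test/c1/o1')
info = resp.json()
for url in info['endpoints']:
    print(url)            # one URL per object replica
print(info['headers'])    # e.g. X-Backend-Storage-Policy-Index
```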
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware).

Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.

Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object

Bases: object Caching middleware that manages caching in swift.

Created on February 27, 2012

A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid.

@author: eamonn-otoole

Object versioning in Swift has 3 different modes. There are two legacy modes that have similar API with a slight difference in behavior and this middleware introduces a new mode with a completely redesigned API and implementation.

In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object.

This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response.

After the introduction of this feature, containers in a Swift cluster will be in one of 3 possible states: 1. Object versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled.

Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored, as shown in the sketch below.
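A minimal sketch of enabling the new mode (requests library; the endpoint and token are placeholders, not from this document):

```python
import requests

# Placeholder storage URL and token.
storage_url = 'https://swift.example.com/v1/AUTH_1234'
token = 'AUTH_TOKEN'

# Enable new-style versioning on a container; on first enablement the
# middleware creates the hidden container holding object versions.
resp = requests.post(
    storage_url + '/container',
    headers={'X-Auth-Token': token, 'X-Versions-Enabled': 'true'})
resp.raise_for_status()
```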
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged.

To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted.

When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response.

Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id.

A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response.

A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes.

On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable.

Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter:

```
?versions
```

To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker is used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter.

During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest.

To operate on previous versions, clients can use the query parameter:

```
?version-id=<id>
```

where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response.

When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted, with no changes made to the symlink pointing to the current version.
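To make the listing and version-targeted operations concrete, here is a sketch (requests library; the storage URL, token, and container name are placeholders, and the exact listing key names shown are assumptions, not taken from this document):

```python
import requests

storage_url = 'https://swift.example.com/v1/AUTH_1234'  # placeholder
token = 'AUTH_TOKEN'                                     # placeholder

# List every version of every object in the container.
listing = requests.get(
    storage_url + '/container',
    params={'versions': '', 'format': 'json'},
    headers={'X-Auth-Token': token}).json()
for entry in listing:
    # The 'version_id' / 'is_latest' key names are assumed for illustration.
    print(entry['name'], entry.get('version_id'), entry.get('is_latest'))

# Fetch one specific version of an object by its id.
old = requests.get(
    storage_url + '/container/myobject',
    params={'version-id': listing[0].get('version_id')},
    headers={'X-Auth-Token': token})
```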
To enable this new mode in a Swift cluster, the versioned_writes and symlink middlewares must be added to the proxy pipeline; you must also set the option allow_object_versioning to True.

Bases: ObjectVersioningContext

Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available.

Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted.

Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info.

Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object. req original request. versions_cont container where previous versions of the object are stored. account account name.

Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id.

Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on

Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:

```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```

These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable.

status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request.

source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.)

log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information.

Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-).

Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html

The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware.

Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed.

With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source.

Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request.
req swob.Request object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wire_status_int the on the wire status int

Bases: Exception

Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable.

Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined

Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path

paste.deploy app factory for creating WSGI proxy apps.

Returns number of requests allowed per second for given size.

Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration

Bases: object Middleware that makes an entire cluster or individual accounts read only.

Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only.

paste.deploy app factory for creating WSGI proxy apps.

Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file:

```
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
```

get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version

Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation.

The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy.

All metadata of the source object is preserved during object copy. One can also provide additional metadata during the PUT/COPY request. This will over-write any existing conflicting keys.
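As an illustrative sketch of the above (requests library; the storage URL and token are placeholders, not from this document), copying an object while supplying extra metadata:

```python
import requests

storage_url = 'https://swift.example.com/v1/AUTH_1234'  # placeholder
token = 'AUTH_TOKEN'                                     # placeholder

# Copy container2/source_obj to container1/destination_obj, adding one
# metadata key; source metadata is preserved, conflicting keys overwritten.
resp = requests.put(
    storage_url + '/container1/destination_obj',
    headers={
        'X-Auth-Token': token,
        'X-Copy-From': '/container2/source_obj',
        'Content-Length': '0',
        'X-Object-Meta-Reviewed': 'yes',
    })
resp.raise_for_status()
```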
Server side copy can also be used to change the content-type of an existing object.

The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request.

The request to copy an object should have no body (i.e. the content-length of the request must be zero).

There are two ways in which an object can be copied:

Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:

```
curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0'
```

Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:

```
curl -i -X COPY http://<storage_url>/container2/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0'
```

Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object).

Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from the container in the source account and permission to write to the container in the destination account).

Similar to the examples mentioned above, there are two ways to copy objects across accounts:

Like the example above, send a PUT request to copy the object but with an additional header named X-Copy-From-Account specifying the source account. Example:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0'
```

Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of the destination account. Example:

```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0'
```

The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:

```
?multipart-manifest=get
```

If a request is sent without the query parameter, an attempt will be made to copy the whole object but it will fail if the object size is greater than 5GB.

Bases: WSGIContext Please see the SLO docs for Static Large Objects further details.

This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request.

The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:

```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```

Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:

```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```

If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/.

If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found.

For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings.

If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well-defined document structure that can be styled.

Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix.

Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL.

By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting an X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead.

The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure.

Example usage of this middleware via swift:

Make the container publicly readable:

```
swift post -r '.r:*' container
```

You should be able to get objects directly, but no index.html resolution or listings.

Set an index file directive:

```
swift post -m 'web-index:index.html' container
```

You should be able to hit paths that have an index.html without needing to type the index.html part.

Turn on listings:

```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```

Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:

```
swift post -m 'web-listings-css:listings.css' container
```

Set an error file:

```
swift post -m 'web-error:error.html' container
```

Now 401s should load 401error.html, 404s should load 404error.html, etc.

Set Content-Type of directory marker object:

```
swift post -m 'web-directory-type:text/directory' container
```

Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects.

Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict.

The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests.

Returns a Static Web WSGI filter for use with paste.deploy.

Symlink Middleware

Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called the target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process.

Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, a POST with the X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time.

Clients may optionally include an X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink, the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error.

A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself.

A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max, an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used.

If a static symlink (i.e.
a symlink created with an X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without an X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with an X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header).

A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object.

A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency.

A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends an X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing.

A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted.

A COPY request, or a PUT request with an X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself.

An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings.

Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks.

If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object.

A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function.
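Pulling the creation and listing behaviour above together, here is a sketch (requests library; the storage URL, token, and object names are placeholders, not from this document):

```python
import requests

storage_url = 'https://swift.example.com/v1/AUTH_1234'  # placeholder
token = 'AUTH_TOKEN'                                     # placeholder
auth = {'X-Auth-Token': token}

# Dynamic symlink: a zero-byte PUT naming a target that need not exist yet.
requests.put(storage_url + '/links/latest',
             headers=dict(auth, **{'X-Symlink-Target': 'data/report',
                                   'Content-Length': '0'}))

# Static symlink: additionally pin the target's ETag; the target must
# exist and the ETag must match, or the PUT fails with a client error.
etag = requests.head(storage_url + '/data/report',
                     headers=auth).headers['Etag']
requests.put(storage_url + '/links/report-v1',
             headers=dict(auth, **{'X-Symlink-Target': 'data/report',
                                   'X-Symlink-Target-Etag': etag,
                                   'Content-Length': '0'}))

# JSON container listings expose symlink_path for each symlink object.
for entry in requests.get(storage_url + '/links',
                          params={'format': 'json'}, headers=auth).json():
    print(entry['name'], entry.get('symlink_path'))
```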
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info.

If an X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata.

Errors: A PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. A POST with the header X-Symlink-Target will produce a 400 BadRequest error. A GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. A PUT/GET/HEAD on a symlink that includes an X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error.

Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used.

Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object.

If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required:

Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster.

Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called.

Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator

Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator

Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called

Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to the client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator

Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator

Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly.

Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly.

Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information.

All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```

There are three special groups: .reseller_admin can do anything to any account for this auth; .reseller_reader can GET/HEAD anything in any account for this auth; .admin can do anything within the account. If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin.

The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix>_<account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override.

The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible.

We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account.

The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix.

If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name.

Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs.

If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs).

To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl
acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = {"admin": ["alice"], "read-write": ["bob", "carol"]}
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```

Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file

Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl()

Returns None if the request is authorized to continue or a standard WSGI response callable if not.

Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Return a user-readable string indicating the errors in the input ACL, or None if there are no errors.

Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user.

WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable

Handles the various request for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above.

Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object

Returns a WSGI filter app for use with paste.deploy.

TempURL Middleware

Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs which contain signatures that are valid for all objects sharing a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from.

To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account.

The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported.

For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the ip to a range of 1.2.3.X, so starting from the ip 1.2.3.0 to 1.2.3.255:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift.

TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present.

With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object, but you still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings:

A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired

A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None

A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-*

A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-*

A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE

A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512

Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals.

Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals.

Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict.

Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*.

Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*.

Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*.

Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*.

Returns the WSGI filter for use with paste.deploy.

Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers.

Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled:

X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container.

X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found.

Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response.

Note It is recommended to use a different archive container for each container that is being versioned.

Note Enabling versioning on an archive container is not recommended.

When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created.

A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups.

A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above.

To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it.

This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline, and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature.

If the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request.

Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location.

If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware.

Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss.

First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
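Before the walkthrough below, here is a minimal sketch of enabling the middleware in proxy-server.conf; surrounding middleware are elided and the placement shown is illustrative:

```
[pipeline:main]
pipeline = ... versioned_writes ... proxy-server

[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
```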
In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

See a listing of the older versions of the object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now delete the current version of the object. Subsequent requests will 404:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To restore a previous version, simply COPY it from the archive container:

```
curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject"
```

Note that the archive container still has all previous versions of the object, including the source for the restore:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To permanently delete a previous version, DELETE it from the archive container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp>
```

If you want to disable all functionality, set allow_versioned_writes to False in the middleware options.

Disable versioning from a container (x is any value except empty):

```
curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container
```

Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name.
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Profiling middleware for Swift Servers. The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates CPU timing statistics for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed at the URLs below.

Index page for browsing profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json

NFL is defined by concatenation of file name, function name and the first
line number. e.g.:: account.py:50(GETorHEAD)
or with full path: /opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:

http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (the default), json, csv, and ods spreadsheet; the spreadsheet format requires the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability, which is enabled by using the matplotlib toolkit; it also needs to be installed if you want to use this feature.
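A rough configuration sketch for enabling the profiler on a proxy server follows; the commented option names are taken from the sample configuration files and should be treated as assumptions to verify against your release:

```
[pipeline:main]
pipeline = ... xprofile proxy-server

[filter:xprofile]
use = egg:swift#xprofile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
# dump_interval = 5.0
# path = /__profile__
```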
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.tempauth.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
And if you are using keystone auth, you need EC2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or created with the openstack ec2 credentials create command. Here is an example of creating an EC2 credential:

```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
```

An example client using keystone auth with the python boto library will be:

```
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=OrdinaryCallingFormat())
```

Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration.

Using tempauth, the minimum example config is:

```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
```

When using keystone, the config will be:

```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
```

Finally, add the s3api middleware section:

```
[filter:s3api]
use = egg:swift#s3api
```

Note keystonemiddleware.authtoken can be located before/after s3api, but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True.

Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 may still remain. Please review the descriptions in the example proxy-server.conf and understand what each config option does before enabling it. The compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for details.

Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy

s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally.

Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like

```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
```

you should now use

```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
```

Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth (no support for third-party authentication middleware).
Wrapper method of _get_response to add s3 acl information from response sysmeta headers. Wrap up the get_response call to hook in the acl handling method. Create a Swift request based on this request's environment.

Bases: BaseException Client provided an X-Amz-Content-SHA256, but it doesn't match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware.

Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property. check_copy_source checks the copy source existence and, if copying an object to itself, checks for illegal request parameters. Returns the source HEAD response. get_container_info will return a result dict of get_container_info from the backend Swift. Returns a dictionary of container info from swift.controllers.base.get_container_info. Raises NoSuchBucket when the container doesn't exist and InternalError when the request failed without a 404. get_response is an entry point to be extended for child classes. If additional tasks are needed when getting the swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request needs to just call _get_response to get a pure swift response. Get and set the object acl property. S3Timestamp from the Date header. If the X-Amz-Date header is specified, it takes precedence over the Date header. :return : S3Timestamp instance. Create a Swift request based on this request's environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured max_upload_part_num and, if given, parts_count. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string.

Bases: object A request class mixin to provide S3 signature v4 functionality. Return a timestamp string according to the auth type. The difference from v2 is that v4 has to look at X-Amz-Date even for the query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to Swift's normal HeaderKeyDict class, but its key name is normalized as S3 clients expect.
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift-specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception

Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why do we need this? The original lxml.etree.Element supports only unicode for the text. That hurts maintainability because we would have to call a lot of encode/decode methods to apply the account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text

Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

Bases: Timestamp This format should be like YYYYMMDDThhmmssZ. mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats for the argument, for S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process. Returns a float instance in epoch time.

Returns the system metadata header for the given resource type and name. Returns the system metadata prefix for the given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html. True is valid, False is invalid.

s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. The current s3api (real S3) ACL model is as follows:

```
AccessControlPolicy:
  Owner:
  AccessControlList:
    Grant[n]: (Grantee, Permission)
```

Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain multiple Grants. By default, AccessControlList has only one Grant allowing FULL_CONTROL to the owner. Each Grant holds a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. For more detail on the S3 ACL model, please see the official documentation: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
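As a concrete illustration of that model, the AccessControlPolicy document an S3 client exchanges might look like the following sketch; the IDs shown are placeholders in tempauth style:

```
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>test:tester</ID>
    <DisplayName>test:tester</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="CanonicalUser">
        <ID>test:tester</ID>
        <DisplayName>test:tester</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
```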
Bases: object S3 ACL class. From http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html: The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance. Convert HTTP headers to an ACL instance.

Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers.

Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated).

Bases: object A dict-like object that returns canned ACL.

Bases: object Grant class which includes both Grantee and Permission. Create an etree element. Convert an ElementTree to an ACL instance.

Bases: object Base class for grantee. Methods: __init__: create a Grantee instance; elem: create an ElementTree from itself. Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance; from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance.

Bases: Grantee Base class for Amazon S3 Predefined Groups. Get an etree element of this instance.

Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket.

Bases: object Owner class for S3 accounts.

Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance.

A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups.

To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic Acl handling. (i.e. it will check acl from ACL_MAP by using HEAD) Make a handler with the name of the controller. (e.g.
BucketAclHandler is for BucketController). It consists of method(s) for the actual S3 methods on controllers, as follows.

Example:

```
class BucketAclHandler(BaseAclHandler):
    def PUT:
        << put acl handling algorithms here for PUT bucket >>
```

Note If a method does not need to call _get_response again outside of the ACL check, the method has to return the response it needs at the end of the method.

Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP. Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body.

Bases: BaseAclHandler BucketAclHandler: Handler for BucketController

Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController

Bases: BaseAclHandler Multipart upload operations require acl checking just once, on the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step when an incoming request is sent to backend Swift. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is ok, but we should check it as soon as possible.

| Controller | Verb | CheckResource | Permission |
|:-|:-|:-|:-|
| Part | PUT | Container | WRITE |
| Uploads | GET | Container | READ |
| Uploads | POST | Container | WRITE |
| Upload | GET | Container | READ |
| Upload | DELETE | Container | WRITE |
| Upload | POST | Container | WRITE |

Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController

Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController

Bases: BaseAclHandler S3AclHandler: Handler for S3AclController

Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController

Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController

Handle the x-amz-acl header. Note that this header is currently used only for normal ACLs (not implemented) on s3acl. TODO: add translation to Swift ACLs (such as x-container-read) for s3acl. Takes an S3-style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet.

Bases: object Base WSGI controller class for the middleware. Returns the target resource type of this controller.

Bases: Controller Handles unsupported requests.

A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container exists. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response.

Bases: Controller Handles account level requests. Handle GET Service request

Bases: Controller Handles bucket requests.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
Middleware that will perform many operations on a single request. Expand tar files into a Swift account. Request must be a PUT with the query parameter ?extract-archive=format specifying the format of the archive file. Accepted formats are tar, tar.gz, and tar.bz2.

For a PUT to the following url:

```
/v1/AUTH_Account/$UPLOAD_PATH?extract-archive=tar.gz
```

UPLOAD_PATH is where the files will be expanded to. UPLOAD_PATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows:

```
/v1/AUTH_Account/$UPLOAD_PATH/$FILE_PATH
```

Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored.

Only regular files will be uploaded. Empty directories, symlinks, etc. will not be uploaded.

If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header:

```
X-Detect-Content-Type: true
```

For example:

```
curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H "Content-Type: application/x-tar" -H "X-Auth-Token: xxx" -H "X-Detect-Content-Type: true"
```

The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type.

For example:

```
setfattr -n user.mime_type -v "application/python-setup" setup.py
setfattr -n user.meta.lunch -v "burger and fries" setup.py
setfattr -n user.meta.dinner -v "baked ziti" setup.py
setfattr -n user.stuff -v "whee" setup.py
```

Will get translated to headers:

```
Content-Type: application/python-setup
X-Object-Meta-Lunch: burger and fries
X-Object-Meta-Dinner: baked ziti
```

The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects.

Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later.
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute.

The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server, and precautions need to be taken to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the request.

The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows:

```
{"Response Status": "201 Created", "Response Body": "", "Errors": [], "Number Files Created": 10}
```

If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed.

There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequest's proxy log will have a swift.source set to EA, and the log's content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz).

Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form:

```
/container_name/obj_name
```

or for a container (which must be empty at time of delete):

```
/container_name
```

The response is similar to extract archive in that every response will be a 200 OK and you must parse the response body for actual results. An example response is:

```
{"Number Not Found": 0, "Response Status": "200 OK", "Response Body": "", "Errors": [], "Number Deleted": 6}
```

If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the request's Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml.
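Putting the delete API together, a request might look like this sketch (the token, account, and names are placeholders):

```
curl -i -X POST 'http://127.0.0.1:8080/v1/AUTH_test?bulk-delete' \
     -H 'X-Auth-Token: <token>' \
     -H 'Content-Type: text/plain' \
     --data-binary $'/container/obj1\n/container/obj2\n/empty_container'
```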
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD, and a content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted).

Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete

Bases: Exception

Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request.

Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised. inner_iter iterable of bytestrings nbytes number of bytes expected

CNAME Lookup Middleware. Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain.

Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result)

The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and it is unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused).

Quotas are set by adding meta values to the container, and are validated when set:

| Metadata | Use |
|:--|:--|
| X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. |
| X-Container-Meta-Quota-Count | Maximum object count of the container. |

The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example:

```
[pipeline:main]
pipeline = catch_errors cache tempauth container_quotas proxy-server

[filter:container_quotas]
use = egg:swift#container_quotas
```
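Setting a quota is then an ordinary metadata POST; a sketch with placeholder values:

```
curl -i -X POST http://127.0.0.1:8080/v1/AUTH_test/container \
     -H 'X-Auth-Token: <token>' \
     -H 'X-Container-Meta-Quota-Bytes: 10000' \
     -H 'X-Container-Meta-Quota-Count: 1000'
```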
Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync.

Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis (...) indicate other middleware you may have chosen to use:

```
[pipeline:main]
pipeline = ... crossdomain ... authtoken ... proxy-server
```

And add a filter section, such as:

```
[filter:crossdomain]
use = egg:swift#crossdomain
cross_domain_policy = <allow-access-from domain="*.example.com" />
    <allow-access-from domain="www.example.com" secure="false" />
```

For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value. The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified:

```
cross_domain_policy = <allow-access-from domain="*" secure="false" />
```

Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942

Returns a 200 response with cross domain policy information.

Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e. expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response:

```
{"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}}
```

This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so:

```
swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g
```

Domain Remap Middleware. Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path.
For example, with the default configuration:

```
container.AUTH-account.example.com/object
container.AUTH-account.example.com/v1/object
```

would both be translated to:

```
container.AUTH-account.example.com/v1/AUTH_account/container/object
```

and:

```
AUTH-account.example.com/container/object
AUTH-account.example.com/v1/container/object
```

would both be translated to:

```
AUTH-account.example.com/v1/AUTH_account/container/object
```

Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a default_reseller_prefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The default_reseller_prefix is not configured by default.

Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_).

For example, with the default configuration:

```
auth-account.example.com/container/object
AUTH-account.example.com/container/object
auth_account.example.com/container/object
AUTH_account.example.com/container/object
```

would all be translated to:

```
<unchanged>.example.com/v1/AUTH_account/container/object
```

When no match is found in reseller_prefixes, the default_reseller_prefix config option is used. When no default_reseller_prefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with default_reseller_prefix = AUTH:

```
account.example.com/container/object
```

would be translated to:

```
account.example.com/v1/AUTH_account/container/object
```

Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged).

Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations.

Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
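A minimal filter section sketch using the option names described above (values illustrative):

```
[filter:domain_remap]
use = egg:swift#domain_remap
storage_domain = example.com
path_root = v1
reseller_prefixes = AUTH
# default_reseller_prefix = AUTH
```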
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body

This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache:

```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```

Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s.

FormPost Middleware. Translates a browser form post into a regular Swift object PUT. The format of the form is:

```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```

Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```

If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:

```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```

The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload.

The <swift-url> is the URL of the Swift destination, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like:

```
https://swift-cluster.example.com/v1/AUTH_account/container/
```

Optionally, you can include an object prefix to better separate different users' uploads, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded).

The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes.

The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired.

The expires attribute is the Unix timestamp before which the form must be submitted; after that time the form is invalidated.

The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:

```
import hmac
from hashlib import sha512
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = b'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count, expires)
# hmac.new() requires bytes for both the key and the message
signature = hmac.new(key, hmac_body.encode('utf-8'), sha512).hexdigest()
```

The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature.

Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored).

Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy.

The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server.

A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response.

Bases: object

Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body.

Returns a 503 response with DISABLED BY FILE in the body.

Returns a 200 response with OK in the body.

Keymaster middleware should be deployed in conjunction with the Encryption middleware.

Bases: object

Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment.

Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function.

Creates an encryption key that is unique for the given path.

path the (WSGI string) path of the resource being encrypted.

secret_id the id of the root secret from which the key should be derived.

an encryption key.

UnknownSecretIdError if the secret_id is not recognised.

Bases: BaseKeyMaster

Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss.

Bases: WSGIContext

The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:

```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```

Set up container and object keys based on the request path. Keys are derived from the request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data.

key_id if given this should be a dict with the items included under the id key of a dict returned by this method.

A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys.

Bases: object

Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone.
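A minimal sketch of such a pipeline is shown below; everything other than the authtoken/keystoneauth ordering is illustrative:

```
[pipeline:main]
pipeline = catch_errors cache authtoken keystoneauth proxy-server
```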
(See the proxy-server.conf-sample file.) The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware.

In proxy-server.conf you will need to enable account auto creation by setting account_autocreate to true:

```
[app:proxy-server]
account_autocreate = true
```

And add a swift authorization filter section, such as:

```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```

The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles.

The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234.

If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:

```
reseller_prefix = NEWAUTH
```

Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path.

It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:

```
reseller_prefix = AUTH, SERVICE
```

This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:

```
operator_roles
service_roles
```

For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:

```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, someotherrole
```

X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:

```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```

The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
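To illustrate, such a cross-tenant grant could be applied with python-swiftclient roughly as follows (a sketch; the storage URL, token, container name and the two UUIDs are all placeholders):

```
from swiftclient import client

storage_url = 'https://swift.example.com/v1/AUTH_1234'  # placeholder
token = '<auth token scoped to the granting project>'

# <tenant-uuid>:<user-uuid> of the grantee
acl = '0123456789abcdef0123456789abcdef:fedcba9876543210fedcba9876543210'
client.post_container(storage_url, token, 'shared',
                      headers={'X-Container-Read': acl})
```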
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:

```
default_domain_id = default
```

The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:

```
allow_names_in_acls = false
```

To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account.

By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:

```
allow_overrides = false
```

app The next WSGI app in the pipeline

conf The dict of configuration values

Authorize an anonymous request. None if authorization is granted, an error page otherwise.

Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Returns a WSGI filter app for use with paste.deploy.

List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop.

Using the original API, answers requests of the form:

```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```

with a JSON-encoded list of endpoints of the form:

```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```

correspondingly, e.g.:

```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```

Using the v2 API, answers requests of the form:

```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```

with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:

```
{
  "endpoints": [
    "http://10.1.1.1:6210/sda1/2/a/c3/o1",
    "http://10.1.1.1:6230/sda3/2/a/c3/o1",
    "http://10.1.1.1:6240/sda4/2/a/c3/o1"
  ],
  "headers": {"X-Backend-Storage-Policy-Index": "1"}
}
```

In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
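A consumer inside the cluster might use the v2 API along these lines (a sketch; the proxy host is a placeholder and the /endpoints/ path assumes the default configuration):

```
import json
from urllib.request import urlopen

# Ask the middleware where replicas of a/c3/o1 live.
with urlopen('http://proxy.local:8080/endpoints/v2/a/c3/o1') as resp:
    info = json.load(resp)

for endpoint in info['endpoints']:
    # Requests to these URLs should carry the returned headers,
    # e.g. X-Backend-Storage-Policy-Index.
    print(endpoint, info['headers'])
```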
The /endpoints/ path is customizable (list_endpoints_path configuration parameter).

Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date.

Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware).

Bases: object

List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift).

app The next WSGI filter or app in the paste.deploy chain.

conf The configuration dict for the middleware.

Get the ring object to use to handle a request based on its policy.

policy index as defined in swift.conf

appropriate ring object

Bases: object

Caching middleware that manages caching in swift.

Created on February 27, 2012

A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid.

@author: eamonn-otoole

Object versioning in Swift has 3 different modes. There are two legacy modes that have similar APIs with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation.

In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and for interacting with older versions of an object.

This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either the X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response.

After the introduction of this feature, containers in a Swift cluster will be in one of three possible states: 1. object versioning never enabled, 2. object versioning enabled, or 3. object versioning disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is enabled or disabled.

Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
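As an illustration, enabling versioning and capturing the version id of a new write might look like the following sketch (using the requests library; the storage URL, token and names are placeholders):

```
import requests

storage_url = 'https://swift.example.com/v1/AUTH_test'  # placeholder
auth = {'X-Auth-Token': '<token>'}

# Turn versioning on for a container.
requests.post(storage_url + '/my-container',
              headers={**auth, 'X-Versions-Enabled': 'true'})

# Every subsequent PUT reports the version id it created.
resp = requests.put(storage_url + '/my-container/report.txt',
                    headers=auth, data=b'first draft')
print(resp.headers.get('X-Object-Version-Id'))
```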
This hidden container will inherit the same Storage Policy as its parent container.

To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged.

To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted.

When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response.

Note

When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null-id object, the object will be versioned off with a regular version-id.

A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response.

A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes.

On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable.

Clients can now operate on previous versions of an object using this new versioning API. To list previous versions, issue a GET request to the versioned container with the query parameter:

```
?versions
```

To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker is used to specify the version id.

All other pagination parameters can be used in conjunction with the versions parameter.

During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest.

To operate on previous versions, clients can use the query parameter:

```
?version-id=<id>
```

where the <id> is the value from the X-Object-Version-Id header.

Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response.

A HEAD/GET request to a delete-marker will result in a 404 Not Found response.

When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted, with no changes made to the symlink pointing to the current version.
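Listing versions and retrieving a specific one could then look roughly like this sketch (placeholders as before; version_id and is_latest are the listing fields described above):

```
import requests

storage_url = 'https://swift.example.com/v1/AUTH_test'  # placeholder
auth = {'X-Auth-Token': '<token>'}

# Enumerate every version in the container.
listing = requests.get(storage_url + '/my-container',
                       headers=auth,
                       params={'versions': '', 'format': 'json'}).json()
for entry in listing:
    print(entry['name'], entry.get('version_id'), entry.get('is_latest'))

# Fetch one particular previous version.
old = requests.get(storage_url + '/my-container/report.txt',
                   headers=auth,
                   params={'version-id': listing[-1]['version_id']})
```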
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline, and you must also set the option allow_object_versioning to True.

Bases: ObjectVersioningContext

Bases: object

Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available.

Bases: ObjectVersioningContext

Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted.

Handle request for container resource. On PUT or POST, set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info.

Bases: ObjectVersioningContext

Handle DELETE requests. Copy the current version of the object to versions_container and write a delete marker before proceeding with the original request.

req original request.

versions_cont container where previous versions of the object are stored.

api_version api version.

account_name account name.

object_name name of object of original request

Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object.

req original request.

versions_cont container where previous versions of the object are stored.

account account name.

Check if the current version of the object is a versions-symlink. If not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container.

req original request.

versions_cont container where previous versions of the object are stored.

api_version api version.

account_name account name.

object_name name of object of original request

Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id.

Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object.

req The original request

versions_cont container holding versions of the requested obj

api_version should be v1 unless swift bumps api version

account account name string

container container name string

object object name string

is_enabled is versioning currently enabled

version version of the object to act on

Bases: WSGIContext

Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:

```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```

These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split().

remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable.

status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request.

source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.)

log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information.

Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-).

Note

The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html

The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server.

For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by the leftmost middleware.

Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made.

All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source.

Bases: object

Middleware that logs Swift proxy requests in the swift log format.

Log a request.
req swob.Request object for the request

status_int integer code for the response status

bytes_received bytes successfully read from the request body

bytes_sent bytes yielded to the WSGI server

start_time timestamp request started

end_time timestamp request completed

resp_headers dict of the response headers

ttfb time to first byte

wire_status_int the on the wire status int

Bases: Exception

Bases: object

Rate limiting middleware. Rate limits requests on both an Account and Container level. Limits are configurable.

Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order.

req swob request

account_name account name from path

container_name container name from path

obj_name object name from path

global_ratelimit this account has an account wide ratelimit on all writes combined

Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None.

account_name account name from path

container_name container name from path

obj_name object name from path

paste.deploy app factory for creating WSGI proxy apps.

Returns number of requests allowed per second for given size.

Parses general params for rate limits, looking for things that start with the provided name_prefix within the provided conf, and returns lists for both internal use and for /info.

conf conf dict to parse

name_prefix prefix of config params to look for

info set to return extra stuff for /info registration

Bases: object

Middleware that makes an entire cluster or individual accounts read only.

Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only.

paste.deploy app factory for creating WSGI proxy apps.

Bases: object

Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file:

```
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
```

The endpoints exposed include, among others: the number of async pendings; auditor info; devices; disk utilization statistics; the number of drive audit errors; expirer info; info from /proc/loadavg and /proc/meminfo; all mounted filesystems from /proc/mounts; obj/container/account quarantine counts; reconstruction, relinker and replication info; all ring md5sums; sharding info; info from /proc/net/sockstat and sockstat6; the md5 of swift.conf; the current time; unmounted (failed?) devices; updater info; and the swift version.

Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size.

Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation.

The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy.

All metadata of the source object is preserved during object copy. One can also provide additional metadata during the PUT/COPY request. This will overwrite any existing conflicting keys.
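For instance, a copy that also attaches fresh metadata could be issued like this sketch (requests library; URL, token and object names are placeholders):

```
import requests

storage_url = 'https://swift.example.com/v1/AUTH_test'  # placeholder
resp = requests.put(
    storage_url + '/container1/destination_obj',
    headers={'X-Auth-Token': '<token>',
             'X-Copy-From': '/container2/source_obj',
             'Content-Length': '0',
             # New metadata overrides any conflicting key from the source.
             'X-Object-Meta-Reviewed': 'true'})
print(resp.status_code)
```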
Server side copy can also be used to change the content-type of an existing object.

The destination container must exist before requesting a copy of the object.

When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request.

The request to copy an object should have no body (i.e. the content-length of the request must be zero).

There are two ways in which an object can be copied:

Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:

```
curl -i -X PUT http://<storage_url>/container1/destination_obj \
 -H 'X-Auth-Token: <token>' \
 -H 'X-Copy-From: /container2/source_obj' \
 -H 'Content-Length: 0'
```

Send a COPY request with an existing object in the URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:

```
curl -i -X COPY http://<storage_url>/container2/source_obj \
 -H 'X-Auth-Token: <token>' \
 -H 'Destination: /container1/destination_obj' \
 -H 'Content-Length: 0'
```

Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object).

Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from the container in the source account and permission to write to the container in the destination account).

Similar to the examples mentioned above, there are two ways to copy objects across accounts:

Like the example above, send a PUT request to copy the object but with an additional header named X-Copy-From-Account specifying the source account. Example:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj \
 -H 'X-Auth-Token: <token>' \
 -H 'X-Copy-From: /container/source_obj' \
 -H 'X-Copy-From-Account: AUTH_test2' \
 -H 'Content-Length: 0'
```

Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of the destination account. Example:

```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj \
 -H 'X-Auth-Token: <token>' \
 -H 'Destination: /container/destination_obj' \
 -H 'Destination-Account: AUTH_test1' \
 -H 'Content-Length: 0'
```

The best option to copy a large object is to copy the segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:

```
?multipart-manifest=get
```

If a request is sent without the query parameter, an attempt will be made to copy the whole object but it will fail if the object size is greater than 5GB.

Bases: WSGIContext

Please see the Static Large Objects (SLO) docs for further details.

This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request.

The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:

```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```

Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:

```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```

If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/

If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found.

For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings.

If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well-defined document structure that can be styled.

Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix.

Note

temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL.

By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead.

The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure.

Example usage of this middleware via swift:

Make the container publicly readable:

```
swift post -r '.r:*' container
```

You should be able to get objects directly, but no index.html resolution or listings.

Set an index file directive:

```
swift post -m 'web-index:index.html' container
```

You should be able to hit paths that have an index.html without needing to type the index.html part.

Turn on listings:

```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```

Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:

```
swift post -m 'web-listings-css:listings.css' container
```

Set an error file:

```
swift post -m 'web-error:error.html' container
```

Now 401's should load 401error.html, 404's should load 404error.html, etc.

Set Content-Type of directory marker object:

```
swift post -m 'web-directory-type:text/directory' container
```

Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects.

Bases: object

The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview.

The proxy logs created for any subrequests made will have swift.source set to SW.

app The next WSGI application/filter in the paste.deploy pipeline.

conf The filter configuration dict.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict. Only used in tests.

Returns a Static Web WSGI filter for use with paste.deploy.

Symlink Middleware

Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called the target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process.

Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with an X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time.

Clients may optionally include an X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink, the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error.

A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself.

A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max, an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used.
If a static symlink (i.e. a symlink created with an X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without an X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with an X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header).

A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object.

A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because, under eventual consistency, object servers cannot know for sure whether the current object is a symlink or not.

A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends an X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing.

A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted.

A COPY request, or a PUT request with an X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself.

An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings.

Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurls will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks.

If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object.

A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can use this field to differentiate symlinks from other objects.
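For example, a client could surface the symlinks in a listing with a sketch like this (requests library; URL and token are placeholders):

```
import requests

storage_url = 'https://swift.example.com/v1/AUTH_test'  # placeholder
listing = requests.get(storage_url + '/my-container',
                       headers={'X-Auth-Token': '<token>'},
                       params={'format': 'json'}).json()

for entry in listing:
    if 'symlink_path' in entry:
        print(entry['name'], '->', entry['symlink_path'])
```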
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info.

If an X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes.

If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata.

Errors

PUT with the header X-Symlink-Target and a non-zero Content-Length will produce a 400 BadRequest error.

POST with the header X-Symlink-Target will produce a 400 BadRequest error.

GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error.

PUT/GET/HEAD on a symlink that includes an X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error.

POSTs will produce a 307 Temporary Redirect error.

Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used.

Note

Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object.

If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required:

Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf.

Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server.

Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note

These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster.

Bases: WSGIContext

Handle container requests.

req a Request

start_response start_response function

Response Iterator after start_response called.

Bases: object

Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Bases: WSGIContext

Handle get/head request and, in case the response is a symlink, redirect the request to the target object.
req HTTP GET or HEAD object request

Response Iterator

Handle get/head request when the client sent the parameter ?symlink=get

req HTTP GET or HEAD object request with param ?symlink=get

Response Iterator

Handle object requests.

req a Request

start_response start_response function

Response Iterator after start_response has been called

Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to the client.

Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is.

On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not.

req HTTP POST object request

HTTPTemporaryRedirect if POSTing to a symlink.

Response Iterator

Handle put request when it contains the X-Symlink-Target header. Symlink headers are validated and moved to the sysmeta namespace.

req HTTP PUT object request

Response Iterator

Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers.

headers request headers dict. Note that the headers dict will be updated directly.

Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers.

headers request headers dict. Note that the headers dict will be updated directly.

Test authentication and authorization system.

Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information.

All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```

There are three special groups:

.reseller_admin can do anything to any account for this auth

.reseller_reader can GET/HEAD anything in any account for this auth

.admin can do anything within the account

If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin.

The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix><account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override.

The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH. When a request's token and/or path start with AUTH, this middleware knows it is responsible.

We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account.

The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix.

If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed.

The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin and .reseller_reader it is not a reserved name.

Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs.

If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users.

Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata.

Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata.

Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs).

To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl

acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:
```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = {"admin": ["alice"], "read-write": ["bob", "carol"]}
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join("-H \"%s: %s\"" % (k, v) for k, v in headers.items())
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```

Bases: object

app The next WSGI app in the pipeline

conf The dict of configuration values from the Paste config file

Return a dict of ACL data from the account server via get_account_info.

Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth.

X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl()

Returns None if the request is authorized to continue or a standard WSGI response callable if not.

Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Return a user-readable string indicating the errors in the input ACL, or None if there are no errors.

Get groups for the given token.

env The current WSGI environment dictionary.

token Token to validate and return a group string for.

None if the token is invalid, or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user.

WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in a swob.Request object and passes it down.

env WSGI environment dictionary

start_response WSGI callable

Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use.

req The swob.Request to process.

swob.Response, 2xx on success with data set as explained above.

Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response).

req swob.Request object

Returns a WSGI filter app for use with paste.deploy.

TempURL Middleware

Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request.

If the user were to share the link with all their friends, or accidentally post it on a forum, etc., the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs containing signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects.

Restrictions can also be placed on the IP that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from.

To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account.

The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported.

For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time

method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward.

Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time

method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/
subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the IP ranges from which the resource can be accessed to the IP 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the IP 1.2.3.4. The middleware detects an IP-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the IP to a range of 1.2.3.X, i.e. from 1.2.3.0 to 1.2.3.255:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed.

Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift.

TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present.

With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object directly, but you may still want the content to be saved locally with a specific filename. You can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings:

A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired

A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None

A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-*

A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-*

A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE

A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512

Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals.

Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match.

Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals.

Bases: object

WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information.

The proxy logs created for any subrequests made will have swift.source set to TU.

app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict.

Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY.

Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*.

Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE.

Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*.

Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay.

Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*.

Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key.

Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*.

Returns the WSGI filter for use with paste.deploy.

Note

This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers.

Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled:

X-History-Location: On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container.

X-Versions-Location: On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found.

Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response.

Note It is recommended to use a different archive container for each container that is being versioned.

Note Enabling versioning on an archive container is not recommended.

When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created.

A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups.

A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
In other words, new versions are only created when the content of the object changes.

A DELETE to a versioned object will be handled in one of two ways, as described above.

To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it.

This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline, and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature.

In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request.

Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location.

If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that they can now be tracked by the versioned_writes middleware.

Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss.

First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" \
    -H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
```

See a listing of the older versions of the object:

```
curl -i -H "X-Auth-Token: <token>" \
    http://<storage_url>/versions?prefix=008myobject/
```

Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" \
    http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
```

As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" \
    -H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
```

Now delete the current version of the object. Subsequent requests will 404:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" \
    http://<storage_url>/container/myobject
```

A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object:

```
curl -i -H "X-Auth-Token: <token>" \
    http://<storage_url>/versions?prefix=008myobject/
```

To restore a previous version, simply COPY it from the archive container:

```
curl -i -XCOPY -H "X-Auth-Token: <token>" \
    http://<storage_url>/versions/008myobject/<timestamp> \
    -H "Destination: container/myobject"
```

Note that the archive container still has all previous versions of the object, including the source for the restore:

```
curl -i -H "X-Auth-Token: <token>" \
    http://<storage_url>/versions?prefix=008myobject/
```

To permanently delete a previous version, DELETE it from the archive container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" \
    http://<storage_url>/versions/008myobject/<timestamp>
```

If you want to disable all functionality, set allow_versioned_writes to False in the middleware options.

Disable versioning from a container (x is any value except empty):

```
curl -i -XPOST -H "X-Auth-Token: <token>" \
    -H "X-Remove-Versions-Location: x" http://<storage_url>/container
```

Bases: WSGIContext

Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place.

req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name.
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request.

req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Copy current version of object to versions_container before proceeding with original request.

req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Profiling middleware for Swift Servers.

The current implementation is based on an eventlet-aware profiler. (For the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates cpu timing statistics information for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below.

Index page to browse profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json

NFL is defined by concatenation of file name, function name and the first
line number.
e.g.::
    account.py:50(GETorHEAD)
or with full path:
    opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:

http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory.

The profiling data is provided in four formats: binary (by default), json, csv and odf spreadsheet, the last of which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability, which is enabled by using the matplotlib toolkit. It is also required to be installed if you want to use this visualization feature.
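As a rough illustration of the paste configuration mentioned above, a minimal sketch for a proxy server (the filter name follows the usual egg:swift#... convention; the commented option names are drawn from the sample configs and should be verified against your release):

```
[pipeline:main]
pipeline = catch_errors cache xprofile proxy-server

[filter:xprofile]
use = egg:swift#xprofile
# dump_interval = 5.0
# log_filename_prefix = /tmp/log/swift/profile/default.profile
```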
{ "category": "Runtime", "file_name": "middleware.html#module-swift.common.middleware.xprofile.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "The name of Dark Data refers to the scientific hypothesis of Dark Matter, which supposes that the universe contains a lot of matter than we cannot observe. The Dark Data in Swift is the name of objects that are not accounted in the containers. The experience of running large scale clusters suggests that Swift does not have any particular bugs that trigger creation of dark data. So, this is an excercise in writing watchers, with a plausible function. When enabled, Dark Data watcher definitely drags down the clusters overall performance. Of course, the load increase can be mitigated as usual, but at the expense of the total time taken by the pass of auditor. Because the watcher only deems an object dark when all container servers agree, it will silently fail to detect anything if even one of container servers in the ring is down or unreacheable. This is done in the interest of operators who run with action=delete. If a container is sharded, there is a small edgecase where an object row could be misplaced. So it is recommended to always start with action=log, before your confident to run action=delete. Finally, keep in mind that Dark Data watcher needs the container ring to operate, but runs on an object node. This can come up if cluster has nodes separated by function. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "middleware.html#static-large-objects.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential:

```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
```

An example client using keystone auth with the python boto library will be:

```
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=OrdinaryCallingFormat())
```

Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline setting. Using tempauth, the minimum example config is:

```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
```

When using keystone, the config will be:

```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
```

Finally, add the s3api middleware section:

```
[filter:s3api]
use = egg:swift#s3api
```

Note keystonemiddleware.authtoken can be located before/after s3api, but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True.

Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 still remain. Please review the descriptions in the example proxy-server.conf and understand what happens with each config option before enabling it.

Compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for detail.

Bases: object

S3Api: S3 compatibility middleware

Check that required filters are present in order in the pipeline.

Check that proxy-server.conf has an appropriate pipeline for s3api.

Standard filter factory to use the middleware with paste.deploy

s3token middleware is for authentication with s3api + keystone. This middleware:

Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally.

Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like

```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
```

you should now use

```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
```

Bases: object

Middleware that handles S3 authentication.

Returns a WSGI filter app for use with paste.deploy.

Bases: object

wsgi.input wrapper to verify the hash of the input as it's read.

Bases: S3Request

S3Acl request object.

The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
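For background on the signature v4 mixin documented next: V4 request signatures are computed with a chained HMAC-SHA256 over the date, region, and service. A minimal sketch of that standard AWS derivation (this is the published algorithm, not s3api's internal API; the function names here are illustrative):

```
import hashlib
import hmac


def _hmac(key, msg):
    # One step of the AWS SigV4 key-derivation chain.
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()


def signing_key(secret_key, datestamp, region, service='s3'):
    # e.g. signing_key('testing', '20240101', 'us-east-1')
    k_date = _hmac(('AWS4' + secret_key).encode('utf-8'), datestamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, 'aws4_request')


def sign(secret_key, datestamp, region, string_to_sign):
    key = signing_key(secret_key, datestamp, region)
    return hmac.new(key, string_to_sign.encode('utf-8'),
                    hashlib.sha256).hexdigest()
```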
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swifts HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why we need this?: Original lxml.etree.Element supports only unicode for the text. It declines maintainability because we have to call a lot of encode/decode methods to apply account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such a redundant codes from swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a" }, { "data": "method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time really like as time.mktime the difference from time.mktime is allowing to 2 formats string for the argument for the S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. 
The current s3api (real S3) ACL model is as follows:

```
AccessControlPolicy:
    Owner:
    AccessControlList:
        Grant[n]:
            (Grantee, Permission)
```

Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant includes a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission.

This module defines the groups and the relation tree.

If you want more detail on the S3 ACL model, please see the official documentation here: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html

Bases: object

S3 ACL class.

http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission.

Check that the user is an owner.

Check that the user has a permission.

Decode the value to an ACL instance.

Convert an ElementTree to an ACL instance

Convert HTTP headers to an ACL instance.

Bases: Group

Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request.

Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves completely the same as AuthenticatedUsers.

Bases: Group

This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated).

Bases: object

A dict-like object that returns canned ACLs.

Bases: object

Grant Class which includes both Grantee and Permission

Create an etree element.

Convert an ElementTree to an ACL instance

Bases: object

Base class for grantee.

Methods: __init__: create a Grantee instance; elem: create an ElementTree from itself.

Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance; from_elem: convert an ElementTree to a Grantee instance.

Get an etree element of this instance.

Convert a grantee string in the HTTP header to a Grantee instance.

Bases: Grantee

Base class for Amazon S3 Predefined Groups

Get an etree element of this instance.

Bases: Group

WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket.

Bases: object

Owner class for S3 accounts

Bases: Grantee

Canonical user class for S3 accounts.

Get an etree element of this instance.

A set of predefined grants supported by AWS S3.

Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance.

Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata.

Convert a URI to one of the predefined groups.

To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller.

BaseAclHandler wraps basic Acl handling. (i.e. it will check acl from ACL_MAP by using HEAD)

Make a handler with the name of the controller. (e.g.
BucketAclHandler is for BucketController.) It consists of method(s) for the actual S3 method on controllers as follows.

Example:

```
class BucketAclHandler(BaseAclHandler):
    def PUT(self):
        # << put acl handling algorithms here for PUT bucket >>
        ...
```

Note If the method DOESN'T need to recall get_response outside of acl checking, the method has to return the response it needs at the end of the method.

Bases: object

BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP

Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body.

Bases: BaseAclHandler

BucketAclHandler: Handler for BucketController

Bases: BaseAclHandler

MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController

Bases: BaseAclHandler

MultiUpload stuff requires acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check acl only when the verb is defined. We should define the verb as the first step of the request to backend Swift on the incoming request. The BASE container name is always w/o MULTIUPLOAD_SUFFIX. Any check timing is ok but we should check it as soon as possible.

| Controller | Verb | CheckResource | Permission |
|:-|:-|:-|:-|
| Part | PUT | Container | WRITE |
| Uploads | GET | Container | READ |
| Uploads | POST | Container | WRITE |
| Upload | GET | Container | READ |
| Upload | DELETE | Container | WRITE |
| Upload | POST | Container | WRITE |

Bases: BaseAclHandler

ObjectAclHandler: Handler for ObjectController

Bases: MultiUploadAclHandler

PartAclHandler: Handler for PartController

Bases: BaseAclHandler

S3AclHandler: Handler for S3AclController

Bases: MultiUploadAclHandler

UploadAclHandler: Handler for UploadController

Bases: MultiUploadAclHandler

UploadsAclHandler: Handler for UploadsController

Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acl like x-container-read to s3acl

Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet.

Bases: object

Base WSGI controller class for the middleware

Returns the target resource type of this controller.

Bases: Controller

Handles unsupported requests.

A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead.

A decorator to ensure the container existence.

A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response.

Bases: Controller

Handles account level requests.

Handle GET Service request

Bases: Controller

Handles bucket requests.

Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request

Bases: Controller

Handles requests on objects

Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request

Bases: Controller

Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl

Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the" }, { "data": "The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows: ``` {\"Response Status\": \"201 Created\", \"Response Body\": \"\", \"Errors\": [], \"Number Files Created\": 10} ``` If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequests proxy log will have a swift.source set to EA the logs content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form: ``` /containername/objname ``` or for a container (which must be empty at time of delete): ``` /container_name ``` The response is similar to extract archive as in every response will be a 200 OK and you must parse the response body for actual results. An example response is: ``` {\"Number Not Found\": 0, \"Response Status\": \"200 OK\", \"Response Body\": \"\", \"Errors\": [], \"Number Deleted\": 6} ``` If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequests error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the requests Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. 
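Since both bulk extract and bulk delete always return 200 and report real outcomes in the body, a client has to skip any padding whitespace and inspect the parsed document itself; a minimal illustrative sketch (the helper name is hypothetical, not part of Swift):

```
import json


def check_bulk_response(body_bytes):
    # The proxy may prepend whitespace to keep the connection alive,
    # so strip it before parsing the JSON document.
    doc = json.loads(body_bytes.decode('utf-8').lstrip())
    status = doc.get('Response Status', '')
    if not status.startswith(('200', '201')):
        raise RuntimeError('bulk operation failed: %s, errors=%r'
                           % (status, doc.get('Errors')))
    return doc

# e.g. doc = check_bulk_response(resp.content)
#      print(doc.get('Number Files Created'))
```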
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD and the log's content length of 0. If double proxy-logging is used, the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted).

Bases: Exception

Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration.

resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete

Bases: Exception

Bases: object

Middleware that provides high-level error handling and ensures that a transaction id will be set for every request.

Bases: WSGIContext

Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised.

inner_iter iterable of bytestrings nbytes number of bytes expected

CNAME Lookup Middleware

Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS.

This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain.

Bases: object

CNAME Lookup Middleware

See above for a full description.

app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.

Given a domain, returns its DNS CNAME mapping and DNS ttl.

domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result)

The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check.

Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body.

Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and it's unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused).

Quotas are set by adding meta values to the container, and are validated when set:

| Metadata | Use |
|:--|:--|
| X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. |
| X-Container-Meta-Quota-Count | Maximum object count of the container. |

The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware.
For example:

```
[pipeline:main]
pipeline = catch_errors cache tempauth container_quotas proxy-server

[filter:container_quotas]
use = egg:swift#container_quotas
```

Bases: object

WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync.

Bases: object

Cross domain middleware used to respond to requests for cross domain policy information.

If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API.

To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example the ellipsis (...) indicates other middleware you may have chosen to use:

```
[pipeline:main]
pipeline = ... crossdomain ... authtoken ... proxy-server
```

And add a filter section, such as:

```
[filter:crossdomain]
use = egg:swift#crossdomain
cross_domain_policy = <allow-access-from domain="*.example.com" />
    <allow-access-from domain="www.example.com" secure="false" />
```

For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value.

The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified:

```
cross_domain_policy = <allow-access-from domain="*" secure="false" />
```

Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942

Returns a 200 response with cross domain policy information

Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e. expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response:

```
{"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}}
```

This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation.

There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so:

```
swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g
```

Domain Remap Middleware

Middleware that translates container and account parts of a domain to path parameters that the proxy server understands.

Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com.

If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path.
Domain Remap Middleware

Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. For example, with the default configuration:

```
container.AUTH-account.example.com/object
container.AUTH-account.example.com/v1/object
```

would both be translated to:

```
container.AUTH-account.example.com/v1/AUTH_account/container/object
```

and:

```
AUTH-account.example.com/container/object
AUTH-account.example.com/v1/container/object
```

would both be translated to:

```
AUTH-account.example.com/v1/AUTH_account/container/object
```

Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a default_reseller_prefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The default_reseller_prefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration:

```
auth-account.example.com/container/object
AUTH-account.example.com/container/object
auth_account.example.com/container/object
AUTH_account.example.com/container/object
```

would all be translated to:

```
<unchanged>.example.com/v1/AUTH_account/container/object
```

When no match is found in reseller_prefixes, the default_reseller_prefix config option is used. When no default_reseller_prefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with default_reseller_prefix = AUTH:

```
account.example.com/container/object
```

would be translated to:

```
account.example.com/v1/AUTH_account/container/object
```

Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations.

Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
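To make the translation rules concrete, here is a toy, illustrative sketch of the host-to-path mapping (not the middleware's actual code; it ignores ports and the default_reseller_prefix handling):

```python
def remap(host, path, storage_domain='example.com',
          path_root='v1', reseller_prefixes=('AUTH',)):
    """Toy translation of '<container>.<account>.example.com' + path."""
    suffix = '.' + storage_domain
    if not host.endswith(suffix):
        return path                      # not our domain: pass through
    labels = host[:-len(suffix)].split('.')
    account, container = labels[-1], labels[:-1]
    # Case-correct the reseller prefix, then map '-' to '_'
    for prefix in reseller_prefixes:
        if account.lower().startswith(prefix.lower() + '-') or \
                account.lower().startswith(prefix.lower() + '_'):
            account = prefix + '_' + account[len(prefix) + 1:]
    account = account.replace('-', '_')
    if path.startswith('/' + path_root + '/'):
        path = path[len('/' + path_root):]   # avoid doubling the root
    return '/'.join(['', path_root, account] + container) + path

# remap('container.AUTH-account.example.com', '/object')
#   -> '/v1/AUTH_account/container/object'
```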
DLO support centers around a user-specified filter that matches segments and concatenates them together in object listing order. Please see the Dynamic Large Objects (DLO) docs for further details.

Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter.

Provides a factory function for loading encryption middleware.

Bases: object File-like object to be swapped in for wsgi.input.

Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read.

Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys

Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted.

Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, crypto_meta) where crypto_meta is a dict of form returned by get_crypto_meta() ValueError if value is empty

Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by get_crypto_meta() decoder function to turn the decrypted bytes into useful data decrypted value

Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form:

```
<base64-encoded ciphertext>;swift_meta=<crypto meta>
```

value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found.

Extract a crypto_meta dict from a header. header_name name of header that may have crypto_meta check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta

Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys

Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid

Bases: object Middleware for decrypting data and user metadata.

Bases: BaseDecrypterContext Parses json body listing and decrypts encrypted entries. Updates Content-Length header with new body length and returns a body iter.

Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. HTTPInternalServerError if any error occurs while decrypting headers
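As a small illustration of the annotated value format shown above, the crypto meta can be separated from the ciphertext with a plain split; a toy sketch, not the middleware's actual parser:

```python
def split_value(value):
    # '<base64 ciphertext>;swift_meta=<crypto meta>' -> (ciphertext, meta)
    if ';' not in value:
        return value, None               # no crypto meta attached
    ciphertext, _, annotation = value.partition(';')
    if annotation.startswith('swift_meta='):
        return ciphertext, annotation[len('swift_meta='):]
    return value, None

print(split_value('AbCd==;swift_meta={"iv": "..."}'))
```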
Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body

Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body

This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache:

```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```

Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s.

FormPost Middleware

Translates a browser form post into a regular Swift object PUT. The format of the form is:

```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```

Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```

If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:

```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```

The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like:

```
https://swift-cluster.example.com/v1/AUTH_account/container/
```

Optionally, you can include an object prefix to better separate different users' uploads, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:

```
import hmac
from hashlib import sha512
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count, expires)
signature = hmac.new(key.encode(), hmac_body.encode(), sha512).hexdigest()
```

The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored).

Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict.

The maximum size of any attribute's value. Any additional data will be truncated.

The size of data to read from the form at any given time.

Returns the WSGI filter for use with paste.deploy.

The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server.

A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response.

Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body.

Returns a 503 response with DISABLED BY FILE in the body.

Returns a 200 response with OK in the body.

Keymaster middleware should be deployed in conjunction with the Encryption middleware.

Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function.

Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being encrypted. secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised.

Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss.

Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:

```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```
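In Python terms, the derivation above is just an HMAC call; a minimal sketch (the real keymaster also manages secret ids and base64-decodes its configured root secret):

```python
import hashlib
import hmac

def derive_path_key(root_secret, path):
    # <path_key> = HMAC_SHA256(<root_secret>, <path>)
    return hmac.new(root_secret, path.encode('utf-8'),
                    hashlib.sha256).digest()

root = b'\x00' * 32  # placeholder; use a high-entropy 256-bit secret
object_key = derive_path_key(root, '/v1/AUTH_test/container/object')
container_key = derive_path_key(root, '/v1/AUTH_test/container')
```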
Setup container and object keys based on the request path. Keys are derived from the request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys.

Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf-sample shows a pipeline that uses keystone.

The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself so you can either install it by copying the file directly in your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to have the setting account auto creation to true:

```
[app:proxy-server]
account_autocreate = true
```

And add a swift authorization filter section, such as:

```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```

The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:

```
reseller_prefix = NEWAUTH
```

Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:

```
reseller_prefix = AUTH, SERVICE
```

This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:

```
operator_roles
service_roles
```

For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:

```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, someotherrole
```

X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:

```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```

The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:

```
default_domain_id = default
```

The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:

```
allow_names_in_acls = false
```

To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:

```
allow_overrides = false
```

app The next WSGI app in the pipeline conf The dict of configuration values

Authorize an anonymous request. None if authorization is granted, an error page otherwise.

Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Returns a WSGI filter app for use with paste.deploy.

List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form:

```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```

with a JSON-encoded list of endpoints of the form:

```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```

correspondingly, e.g.:

```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```

Using the v2 API, answers requests of the form:

```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```

with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:

```
{ "endpoints": {"http://10.1.1.1:6210/sda1/2/a/c3/o1",
                "http://10.1.1.1:6230/sda3/2/a/c3/o1",
                "http://10.1.1.1:6240/sda4/2/a/c3/o1"},
  "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```

In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
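For example, a data-local scheduler could fetch the replica locations with a plain HTTP GET; a sketch, in which the proxy address and the account/container/object names are assumptions:

```python
import json
import urllib.request

# Assumed proxy address and names, for illustration only.
url = 'http://127.0.0.1:8080/endpoints/v2/AUTH_account/container/object'
with urllib.request.urlopen(url) as resp:
    info = json.load(resp)
for endpoint in info['endpoints']:
    print(endpoint)          # http://<server>:<port>/<dev>/<part>/...
print(info['headers'])       # e.g. X-Backend-Storage-Policy-Index
```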
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware).

Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.

Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object

Bases: object Caching middleware that manages caching in swift.

Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid. @author: eamonn-otoole

Object versioning in Swift has 3 different modes. There are two legacy modes that have similar APIs with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and for interacting with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature containers in a Swift cluster will be in one of three possible states: 1. Object Versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response.

Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id.

A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous versions' content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with query parameter:

```
?versions
```

To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker parameter is used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter:

```
?version-id=<id>
```

where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in only that version being deleted and no changes made to the symlink pointing to the current version.
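A hedged, end-to-end sketch of this API using plain HTTP calls (the storage URL and token are placeholders, and container c is assumed to already exist; swiftclient or curl work equally well):

```python
import urllib.request

BASE = 'https://swift.example.com/v1/AUTH_test'  # placeholder storage URL
HDRS = {'X-Auth-Token': 'AUTH_tk...'}            # placeholder token

def call(method, path, body=None, **extra):
    req = urllib.request.Request(BASE + path, data=body,
                                 headers={**HDRS, **extra}, method=method)
    return urllib.request.urlopen(req)

call('POST', '/c', **{'X-Versions-Enabled': 'true'})  # enable versioning
resp = call('PUT', '/c/o', body=b'new data')          # versioned write
version = resp.headers['X-Object-Version-Id']         # id of this version
call('GET', '/c?versions&format=json')                # list all versions
call('GET', '/c/o?version-id=' + version)             # read that version
```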
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline; you must also set the option allow_object_versioning to True.

Bases: ObjectVersioningContext

Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available.

Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted.

Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the object's bytes and etag to use the target's instead of using the symlink info.

Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object req original request. versions_cont container where previous versions of the object are stored. account account name.

Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id.

Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is applied to the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on

Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:

```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```

These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-).

Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html

The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the second request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by the leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source.
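Since every field is URL-quoted and space-separated, a log consumer can recover the values with a split and unquote; a small sketch, following the default field order documented above:

```python
from urllib.parse import unquote

FIELDS = ('client_ip remote_addr datetime method path protocol '
          'status_int referer user_agent auth_token bytes_recvd '
          'bytes_sent client_etag transaction_id headers request_time '
          'source log_info start_time end_time policy_index').split()

def parse_access_line(line):
    values = [unquote(v) for v in line.split()]
    record = dict(zip(FIELDS, values))
    # A '-' source means an end-user request; anything else is a
    # middleware subrequest and should be excluded from bandwidth sums.
    record['is_subrequest'] = record.get('source', '-') != '-'
    return record
```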
Bases: object Middleware that logs Swift proxy requests in the swift log format.

Log a request. req swob.Request object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wire_status_int the on the wire status int

Bases: Exception

Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable.

Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined

Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path

paste.deploy app factory for creating WSGI proxy apps.

Returns number of requests allowed per second for given size.

Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration

Bases: object Middleware that makes an entire cluster or individual accounts read only.

Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only.

paste.deploy app factory for creating WSGI proxy apps.

Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon recon_cache_path = /var/cache/swift

get # of async pendings
get auditor info
get devices
get disk utilization statistics
get # of drive audit errors
get expirer info
get info from /proc/loadavg
get info from /proc/meminfo
get ALL mounted fs from /proc/mounts
get obj/container/account quarantine counts
get reconstruction info
get relinker info, if any
get replication info
get all ring md5sums
get sharding info
get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size.
get md5 of swift.conf
get current time
list unmounted (failed?) devices
get updater info
get swift version
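For example, the async-pending count could be polled from an object server with an unauthenticated GET; a sketch, where the host and port are assumptions for a SAIO-style setup:

```python
import json
import urllib.request

# Assumed object-server address, e.g. a SAIO node.
url = 'http://127.0.0.1:6200/recon/async'
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))  # e.g. {"async_pending": 0}
```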
Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of the source object is preserved during object copy. One can also provide additional metadata during the PUT/COPY request. This will over-write any existing conflicting keys. Server side copy can also be used to change the content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be zero). There are two ways in which an object can be copied:

Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:

```
curl -i -X PUT http://<storage_url>/container1/destination_obj
 -H 'X-Auth-Token: <token>'
 -H 'X-Copy-From: /container2/source_obj'
 -H 'Content-Length: 0'
```

Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:

```
curl -i -X COPY http://<storage_url>/container2/source_obj
 -H 'X-Auth-Token: <token>'
 -H 'Destination: /container1/destination_obj'
 -H 'Content-Length: 0'
```

Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts:

Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj
 -H 'X-Auth-Token: <token>'
 -H 'X-Copy-From: /container/source_obj'
 -H 'X-Copy-From-Account: AUTH_test2'
 -H 'Content-Length: 0'
```

Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example:

```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj
 -H 'X-Auth-Token: <token>'
 -H 'Destination: /container/destination_obj'
 -H 'Destination-Account: AUTH_test1'
 -H 'Content-Length: 0'
```

The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:

```
?multipart-manifest=get
```

If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB.

Bases: WSGIContext

Please see the Static Large Objects (SLO) docs for further details.

This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:

```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```

Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:

```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```

If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well-defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix.

Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL.

By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift:

Make the container publicly readable:

```
swift post -r '.r:*' container
```

You should be able to get objects directly, but no index.html resolution or listings.

Set an index file directive:

```
swift post -m 'web-index:index.html' container
```

You should be able to hit paths that have an index.html without needing to type the index.html part.

Turn on listings:

```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```

Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:

```
swift post -m 'web-listings-css:listings.css' container
```

Set an error file:

```
swift post -m 'web-error:error.html' container
```

Now 401s should load 401error.html, 404s should load 404error.html, etc.

Set Content-Type of directory marker object:

```
swift post -m 'web-directory-type:text/directory' container
```

Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects.

Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict. Only used in tests.

Returns a Static Web WSGI filter for use with paste.deploy.

Symlink Middleware

Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called the target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink, the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e.
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function. Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata.
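A short sketch of creating and inspecting a symlink over plain HTTP (the storage URL, token and container names are placeholders):

```python
import urllib.request

BASE = 'https://swift.example.com/v1/AUTH_test'  # placeholder storage URL
HDRS = {'X-Auth-Token': 'AUTH_tk...'}            # placeholder token

def call(method, path, body=None, **extra):
    req = urllib.request.Request(BASE + path, data=body,
                                 headers={**HDRS, **extra}, method=method)
    return urllib.request.urlopen(req)

# A zero-byte PUT with the target header creates a dynamic symlink.
call('PUT', '/links/o', body=b'', **{'X-Symlink-Target': 'data/o'})
# A plain GET follows the link to the target ...
print(call('GET', '/links/o').headers.get('Content-Location'))
# ... while ?symlink=get returns the symlink object itself.
call('GET', '/links/o?symlink=get')
```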
Errors

PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error.
POST with the header X-Symlink-Target will produce a 400 BadRequest error.
GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error.
PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error.
POSTs will produce a 307 Temporary Redirect error.

Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used.

Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object.

If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required:

Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster.

Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called.

Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator Handle get/head request when the client sent the parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to the client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink- headers to client-facing X-Symlink- headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```

There are three special groups: .reseller_admin can do anything to any account for this auth. .reseller_reader can GET/HEAD anything in any account for this auth. .admin can do anything within the account. If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix>_<account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle the case when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH. When a request's token and/or path start with AUTH, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin and .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { "admin": ["alice"], "read-write": ["bob", "carol"] }
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```

Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers of the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
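As a sketch of the end result before the details: once a signed URL has been produced (the following paragraphs show how to compute a real signature), the recipient needs nothing but the URL itself, with no token and no account credentials. The sig and expires values here are placeholders:

```
curl "https://swift-cluster.example.com/v1/AUTH_account/container/object?temp_url_sig=<sig>&temp_url_expires=<expires>"
```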
Beyond that, the middleware provides the ability to create URLs which contain signatures that are valid for all objects sharing a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO
Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/
subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the ip to the range 1.2.3.X (i.e. from 1.2.3.0 to 1.2.3.255):

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object, but you may still want the content to be saved locally with a specific filename. You can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameters to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals. Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups. A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

See a listing of the older versions of the object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now delete the current version of the object. Subsequent requests will 404:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To restore a previous version, simply COPY it from the archive container:

```
curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject"
```

Note that the archive container still has all previous versions of the object, including the source for the restore:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To permanently delete a previous version, DELETE it from the archive container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp>
```

If you want to disable all functionality, set allow_versioned_writes to False in the middleware options. Disable versioning from a container (x is any value except empty):

```
curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container
```

Bases: WSGIContext Handle DELETE requests when in stack mode. Delete the current version of the object and pop the previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name.
Handle DELETE requests when in history mode. Copy the current version of the object to the versions container and write a delete marker before proceeding with the original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy the current version of the object to the versions container before proceeding with the original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates cpu timing statistics information for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below. Index page for browsing profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json

NFL is defined by the concatenation of file name, function name and the first
line number, e.g.:: account.py:50(GETorHEAD)
or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:

http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as the proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (by default), json, csv and ODS spreadsheet, the last of which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability which is enabled by using the matplotlib toolkit; it is also required to be installed if you want to use this feature.
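As a sketch, enabling the profiler on a proxy server might look like the following. The filter name xprofile matches the sample configs, but the log_filename_prefix option shown is an assumption; check etc/proxy-server.conf-sample for the authoritative option names:

```
[pipeline:main]
pipeline = catch_errors cache xprofile proxy-server

[filter:xprofile]
use = egg:swift#xprofile
# where accumulated profile data is written (assumed option name)
log_filename_prefix = /tmp/log/swift/profile/default.profile
```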
{ "category": "Runtime", "file_name": "middleware.html#staticweb.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential:

```
+------------+------------------------------------------------+
| Field      | Value                                          |
+------------+------------------------------------------------+
| access     | c2e30f2cd5204b69a39b3f1130ca8f61               |
| links      | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488               |
| secret     | baab242d192a4cd6b68696863e07ed59               |
| trust_id   | None                                           |
| user_id    | 00f0ee06afe74f81b410f3fe03d34fbc               |
+------------+------------------------------------------------+
```

An example client using keystone auth with the python boto library would be:

```
from boto.s3.connection import S3Connection
connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
```

Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is:

```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
```

When using keystone, the config will be:

```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
```

Finally, add the s3api middleware section:

```
[filter:s3api]
use = egg:swift#s3api
```

Note keystonemiddleware.authtoken can be located before/after s3api, but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will issue a token-validation request to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 may still remain. Please make sure you understand the descriptions in the example proxy-server.conf and what each option does before enabling it. Compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for details. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like

```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
```

you should now use

```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
```

Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(There is no support for third party authentication middleware.) Wrapper method of get_response to add s3 acl information from response sysmeta headers. Wrap up the get_response call to hook in the acl handling method. Create a Swift request based on this request's environment. Bases: BaseException Client provided an X-Amz-Content-SHA256, but it doesn't match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property. check_copy_source checks that the copy source exists, whether an object is being copied to itself, and checks for illegal request parameters; it returns the source HEAD response. get_container_info will return a result dict of get_container_info from the backend Swift: a dictionary of container info from swift.controllers.base.get_container_info. Raises NoSuchBucket when the container doesn't exist and InternalError when the request failed without a 404. get_response is an entry point to be extended for child classes. If additional tasks are needed at the time of getting the swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need only call get_response to get a pure swift response. Get and set the object acl property. S3Timestamp from the Date header. If the X-Amz-Date header is specified, it takes priority over the Date header. :return : S3Timestamp instance Create a Swift request based on this request's environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured max_upload_part_num and, if given, parts_count. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return a timestamp string according to the auth type. The difference from v2 is that v4 has to look at X-Amz-Date even for the query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from the map. Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to Swift's normal HeaderKeyDict class, but its key name is normalized as S3 clients expect.
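For orientation, the AWS-style signature v2 that these request classes ultimately validate can be reproduced from the shell. This is a generic sketch of the AWS v2 algorithm using a tempauth-style key, not a transcript of s3api internals:

```
# string-to-sign: method, Content-MD5, Content-Type, Date, canonicalized resource
date=$(date -u '+%a, %d %b %Y %H:%M:%S +0000')
sig=$(printf 'GET\n\n\n%s\n/bucket/object' "$date" \
    | openssl dgst -sha1 -hmac 'testing' -binary | base64)
echo "Authorization: AWS test:tester:$sig"
```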
Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why do we need this? The original lxml.etree.Element supports only unicode for the text. That reduces maintainability because we would have to call a lot of encode/decode methods to apply the account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats for the argument, for S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True if valid, False if invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs.
The current s3api (real S3) ACL model is as follows:

```
AccessControlPolicy:
    Owner:
    AccessControlList:
        Grant[n]:
            (Grantee, Permission)
```

Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant includes a single (Grantee, Permission) pair. Grantee is the user (or user group) allowed the given permission. For more information about the S3 ACL model in detail, please see the official documentation here: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACLs. Bases: object Grant class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantees. Methods: __init__: create a Grantee instance; elem: create an ElementTree from itself. Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance; from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make the controller classes clean, we need these handlers. They are really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic Acl handling. (i.e. it will check the acl from ACL_MAP by using HEAD) Make a handler with the name of the controller. (e.g.
BucketAclHandler is for BucketController) It consists of method(s) for the actual S3 methods on controllers, as follows. Example:

```
class BucketAclHandler(BaseAclHandler):
    def PUT:
        << put acl handling algorithms here for PUT bucket >>
```

Note If the method DOESN'T need to call get_response outside of the acl checking, the method has to return the response it needs at the end of the method. Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get an ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step when making a request to the backend Swift for an incoming request. The BASE container name is always w/o MULTIUPLOAD_SUFFIX. Any check timing is ok but we should check it as soon as possible.

| Controller | Verb | CheckResource | Permission |
|:-|:-|:-|:-|
| Part | PUT | Container | WRITE |
| Uploads | GET | Container | READ |
| Uploads | POST | Container | WRITE |
| Upload | GET | Container | READ |
| Upload | DELETE | Container | WRITE |
| Upload | POST | Container | WRITE |

Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acl, like x-container-read, to s3acl Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container existence. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket requests. Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements the S3 Multipart Upload APIs with the Swift SLO feature. The following explains how s3api uses swift containers and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where the multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not yet either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[upload_id]/[part_number]. Example listing result in the [bucket]+segments container:

```
[bucket]+segments/[upload_id1]    # upload id object for upload_id1
[bucket]+segments/[upload_id1]/1  # part object for upload_id1
[bucket]+segments/[upload_id1]/2  # part object for upload_id1
[bucket]+segments/[upload_id1]/3  # part object for upload_id1
[bucket]+segments/[upload_id2]    # upload id object for upload_id2
[bucket]+segments/[upload_id2]/1  # part object for upload_id2
[bucket]+segments/[upload_id2]/2  # part object for upload_id2
.
.
```

Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit.
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.user_attribute as pax header SCHILY.xattr.user.user_attribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.user_attribute. The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be taken to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this, the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the request. The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows:
```
{"Response Status": "201 Created", "Response Body": "", "Errors": [], "Number Files Created": 10}
```
If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequest's proxy log will have a swift.source set to EA and the log's content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz). Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form:
```
/container_name/obj_name
```
or for a container (which must be empty at time of delete):
```
/container_name
```
The response is similar to extract archive, in that every response will be a 200 OK and you must parse the response body for actual results. An example response is:
```
{"Number Not Found": 0, "Response Status": "200 OK", "Response Body": "", "Errors": [], "Number Deleted": 6}
```
If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the request's Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml.
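Since success can only be determined from the body, a client should always parse it. Here is a minimal sketch, assuming the third-party requests package and placeholder endpoint, token and object names:
```
# Sketch: submit a bulk delete and parse the JSON result, stripping any
# keep-alive whitespace the proxy may prepend to the body.
import json
import requests

# Object names must be URL encoded, one per line.
names = '\n'.join(['/container_name/obj_one', '/container_name/obj_two'])
resp = requests.post(
    'https://swift.example.com/v1/AUTH_account?bulk-delete',
    headers={'X-Auth-Token': 'xxx',
             'Content-Type': 'text/plain',
             'Accept': 'application/json'},
    data=names)
result = json.loads(resp.text.lstrip())
if result['Response Status'] != '200 OK':
    raise RuntimeError('bulk delete failed: %r' % result['Errors'])
print('deleted:', result['Number Deleted'])
```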
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD and the log's content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised. inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and it's unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set:

| Metadata | Use |
|:--|:--|
| X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. |
| X-Container-Meta-Quota-Count | Maximum object count of the container. |

The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware.
For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth containerquotas proxy-server [filter:container_quotas] use = egg:swift#container_quotas ``` Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync. Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Returns a 200 response with cross domain policy information Swift will by default provide clients with an interface providing details about the installation. Unless disabled" }, { "data": "expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response: ``` {\"swift\": {\"version\": \"1.11.0\"}, \"staticweb\": {}, \"tempurl\": {}} ``` This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so: ``` swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g ``` Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URLs host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path. 
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
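To make the encrypter/decrypter overview above concrete, here is a standalone sketch of the encrypt-and-record-crypto-meta pattern. It uses the third-party cryptography package with placeholder key handling and is not the middleware's actual code; in the real middleware the keys come from the keymaster and the crypto_meta is persisted in sysmeta headers.
```
# Conceptual sketch of encrypting a value and recording the crypto_meta
# needed to decrypt it later (AES-CTR, as the encryption middleware uses).
import base64
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # per-resource key; really derived by the keymaster
iv = os.urandom(16)   # a fresh IV is used for every encrypted value


def encrypt_value(value):
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    ciphertext = enc.update(value) + enc.finalize()
    # crypto_meta records what is needed to decrypt later
    crypto_meta = {'cipher': 'AES_CTR_256', 'iv': base64.b64encode(iv)}
    return base64.b64encode(ciphertext), crypto_meta


def decrypt_value(b64_ciphertext, crypto_meta):
    dec = Cipher(algorithms.AES(key),
                 modes.CTR(base64.b64decode(crypto_meta['iv']))).decryptor()
    return dec.update(base64.b64decode(b64_ciphertext)) + dec.finalize()


token, meta = encrypt_value(b'secret user metadata')
assert decrypt_value(token, meta) == b'secret user metadata'
```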
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache:
```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```
Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s. FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is:
```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```
Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:
```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```
If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:
```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```
The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```
The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of the container with a url like:
```
https://swift-cluster.example.com/v1/AUTH_account/container/
```
Optionally, you can include an object prefix to better separate different users' uploads, such as:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```
Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted; after that time it is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:
```
import hmac
from hashlib import sha512
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = b'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size,
                                    max_file_count, expires)
signature = hmac.new(key, hmac_body.encode('utf-8'), sha512).hexdigest()
```
The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation, as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored). Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy. The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being encrypted. secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:
```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```
Set up container and object keys based on the request path. Keys are derived from the request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys. Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone.
The authtoken middleware is shipped with keystonemiddleware - it does not have any other dependencies than itself, so you can either install it by copying the file directly into your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to set account_autocreate to true:
```
[app:proxy-server]
account_autocreate = true
```
And add a swift authorization filter section, such as:
```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```
The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting, which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:
```
reseller_prefix = NEWAUTH
```
Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:
```
reseller_prefix = AUTH, SERVICE
```
This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:
```
operator_roles
service_roles
```
For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:
```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, some_other_role
```
X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:
```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```
The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
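For instance, the UUID form of a cross-tenant grant can be applied with a container POST. The sketch below assumes the python-swiftclient package and uses placeholder URL, token and UUIDs:
```
# Hedged sketch: grant read access to user <user_uuid> in project
# <tenant_uuid> using the UUID-based grantee syntax described above.
from swiftclient import client as swift_client

storage_url = 'https://swift.example.com/v1/AUTH_1234'
token = 'xxx'
# <tenant_uuid>:<user_uuid>, both placeholders
grantee = '98edc2cb17164ab7bd8a6f46baca1dcf:a1e43e6ee1d5463dab7d1b32fe166686'

swift_client.post_container(storage_url, token, 'shared-container',
                            headers={'X-Container-Read': grantee})
```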
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:
```
default_domain_id = default
```
The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:
```
allow_names_in_acls = false
```
To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with reseller_admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:
```
allow_overrides = false
```
app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy. List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form:
```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```
with a JSON-encoded list of endpoints of the form:
```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```
correspondingly, e.g.:
```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```
Using the v2 API, answers requests of the form:
```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```
with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:
```
{"endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1",
               "http://10.1.1.1:6230/sda3/2/a/c3/o1",
               "http://10.1.1.1:6240/sda4/2/a/c3/o1"],
 "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```
In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
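As an illustration of a consumer of this API, the following sketch (standard library only; host and object path are placeholders) fetches the v2 locality data and the backend headers:
```
# Sketch: query the v2 endpoints API from inside the cluster and use
# the returned backend headers.
import json
from urllib.request import urlopen

resp = urlopen('http://proxy.internal:8080/endpoints/v2/a/c3/o1')
info = json.loads(resp.read())
for url in info['endpoints']:
    print('replica at', url)
# Requests made directly to those URLs should carry these headers,
# e.g. X-Backend-Storage-Policy-Index:
print(info['headers'])
```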
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object Bases: object Caching middleware that manages caching in swift. Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:
```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```
There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid. @author: eamonn-otoole Object versioning in Swift has 3 different modes. There are two legacy modes that have a similar API with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature, containers in a Swift cluster will be in one of three possible states: 1. Object versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter:
```
?versions
```
To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest.
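To make the listing format concrete, here is a hedged client sketch (third-party requests package; endpoint and token are placeholders) that lists versions and reads the version_id and is_latest fields:
```
# Sketch: list all object versions in a container and flag delete
# markers by their content type.
import requests

resp = requests.get(
    'https://swift.example.com/v1/AUTH_account/container'
    '?versions&format=json',
    headers={'X-Auth-Token': 'xxx'})
for entry in resp.json():
    marker = entry['content_type'].startswith('application/x-deleted')
    print(entry['name'], entry['version_id'], entry['is_latest'],
          'delete marker' if marker else '')
```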
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline, you must also set the option allowobjectversioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we dont have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a users container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a users container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the objects bytes and etag to use the targets instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object req original request. versions_cont container where previous versions of the object are stored. account account name. Check if the current version of the object is a versions-symlink if not, its because this object was added to the container when versioning was not enabled. Well need to copy it into the versions containers now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned" }, { "data": "req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method. 
The logging format implemented below is as follows:
```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```
These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is Swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the second request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by the leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of Swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request.
req" }, { "data": "object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wirestatusint the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that make an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon reconcachepath = /var/cache/swift get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the systems page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of source object is preserved during object copy. One can also provide additional metadata during PUT/COPY request. This will over-write any existing conflicting keys. 
Server side copy can also be used to change content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. content-length of the request must be" }, { "data": "There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example: ``` curl -i -X PUT http://<storageurl>/container1/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0' ``` Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example: ``` curl -i -X COPY http://<storageurl>/container2/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0' ``` Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from container in source account and permission to write to container in destination account). Similar to examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send PUT request to copy object but with an additional header named X-Copy-From-Account specifying the source account. Example: ``` curl -i -X PUT http://<host>:<port>/v1/AUTHtest1/container/destinationobj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0' ``` Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of destination account. Example: ``` curl -i -X COPY http://<host>:<port>/v1/AUTHtest2/container/sourceobj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0' ``` The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request: ``` ?multipart-manifest=get ``` If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs for Static Large Objects further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delayauthdecision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added. 
For example:

```
[DEFAULT]
...
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server
...
[filter:staticweb]
use = egg:swift#staticweb
```

Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:

```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```

If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html allows the object /pseudo/path/index.html to be served with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note temp_url_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of Listing of /v1/account/container/path. This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of Listing of example.com/path will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable:

```
swift post -r '.r:*' container
```

You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive:

```
swift post -m 'web-index:index.html' container
```

You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings:

```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```

Now you should see object listings for paths and pseudo paths that have no index.html.
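To sanity-check the result, an anonymous request (note: no X-Auth-Token header; the storage URL is a placeholder for your cluster's endpoint) should now return a rendered HTML listing:

```
curl -i http://<storage_url>/container/
```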
Enable a custom listings style sheet: ``` swift post -m 'web-listings-css:listings.css' container ``` Set an error file: ``` swift post -m 'web-error:error.html' container ``` Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object: ``` swift post -m 'web-directory-type:text/directory' container ``` Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series" }, { "data": "Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object theyre pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlinks X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e. 
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlinks Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlinks Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlinks Content-Type using the ?symlink=get interface. You can change a symlinks Content-Type using a POST request. The symlinks Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink" }, { "data": "The target object will not be deleted. A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlinks target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function. 
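As an illustration, a JSON listing entry for a symlink might look something like the following (all names and values here are made up; the fields other than symlink_path follow the normal listing format):

```
{"name": "mylink", "bytes": 0, "hash": "d41d8cd98f00b204e9800998ecf8427e",
 "content_type": "application/symlink",
 "last_modified": "2024-01-01T00:00:00.000000",
 "symlink_path": "/v1/AUTH_test/other-container/target-obj"}
```

A plain object entry carries no symlink_path key, which is what lets clients tell the two apart.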
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors: PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like normal objects. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request; start_response start_response function. Response Iterator after start_response called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request. Response Iterator Handle get/head request when the client sent the parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request; start_response start_response function. Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to the client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to the sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```

There are three special groups: .reseller_admin can do anything to any account for this auth; .reseller_reader can GET/HEAD anything in any account for this auth; .admin can do anything within the account. If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix>_<account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle the case when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { "admin": ["alice"], "read-write": ["bob", "carol"] }
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```

Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various requests for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
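As a rough sketch of that flow (the key, host, and object path below are placeholders; the signature format and further options are detailed next), a link-generating service might do:

```
import hmac
from hashlib import sha256
from time import time

def make_temp_url(key, method, path, ttl):
    # Sign "<method>\n<expires>\n<path>" with the account (or container) key.
    expires = int(time() + ttl)
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
    # Hand the client a link carrying the signature and expiry as query args.
    return ('https://swift-cluster.example.com%s'
            '?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires))

print(make_temp_url(b'mykey', 'GET', '/v1/AUTH_account/container/object', 60))
```

Anyone holding that URL can GET that one object until the expiry passes; nothing else in the account is exposed.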
Beyond that, the middleware provides the ability to create URLs which contain signatures that are valid for all objects sharing a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest())
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm 5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO
Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the ip to a range of 1.2.3.X, starting from the ip 1.2.3.0 to 1.2.3.255:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object, but you still want the content saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals. Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups. A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline, and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

See a listing of the older versions of the object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions:

```
curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
```

Create an object (the first version):

```
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now create a new version of that object:

```
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

Now delete the current version of the object. Subsequent requests will 404:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject
```

A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To restore a previous version, simply COPY it from the archive container:

```
curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject"
```

Note that the archive container still has all previous versions of the object, including the source for the restore:

```
curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/
```

To permanently delete a previous version, DELETE it from the archive container:

```
curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp>
```

If you want to disable all functionality, set allow_versioned_writes to False in the middleware options. Disable versioning from a container (x is any value except empty):

```
curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container
```

Bases: WSGIContext Handle DELETE requests when in stack mode. Delete the current version of the object and pop the previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name.
Handle DELETE requests when in history mode. Copy the current version of the object to the versions_container and write a delete marker before proceeding with the original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy the current version of the object to the versions_container before proceeding with the original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates cpu timing statistics for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below. Index page for browsing profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json

NFL is defined by concatenation of file name, function name and the first
line number. e.g.:: account.py:50(GETorHEAD)
or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:
http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (by default), json, csv, and odf spreadsheet, the last of which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability which is enabled by using the matplotlib toolkit. It is also required to be installed if
{ "category": "Runtime", "file_name": "middleware.html#symlink.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is an example of creating an EC2 credential:

```
+++
| Field | Value |
+++
| access | c2e30f2cd5204b69a39b3f1130ca8f61 |
| links | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488 |
| secret | baab242d192a4cd6b68696863e07ed59 |
| trust_id | None |
| user_id | 00f0ee06afe74f81b410f3fe03d34fbc |
+++
```

An example client using keystone auth with the python boto library will be:

```
from boto.s3.connection import S3Connection
connection = S3Connection(
    aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
    aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
```

Set s3api before your auth in your pipeline in the proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline configuration. Using tempauth, the minimum example config is:

```
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server
```

When using keystone, the config will be:

```
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server
```

Finally, add the s3api middleware section:

```
[filter:s3api]
use = egg:swift#s3api
```

Note keystonemiddleware.authtoken can be located before/after s3api, but we recommend putting it before s3api because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). In the keystonemiddleware.authtoken middleware, you should set the delay_auth_decision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3, so any existing issues in swift3 still remain. Please make sure you understand the descriptions in the example proxy-server.conf and what each config option does before enabling the options. The compatibility will continue to be improved upstream; you can keep an eye on compatibility via a check tool built by SwiftStack. See https://github.com/swiftstack/s3compat for detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates the s3 token with Keystone. Transforms the account name to AUTH_%(tenant_name). Optionally can retrieve and cache the secret from keystone to validate the signature locally. Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like

```
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
```

you should now use

```
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
```

Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as it's read. Bases: S3Request S3Acl request object. The authenticate method will run a pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth.
(no support for third party authentication middleware) Wrapper method of get_response to add s3 acl information from response sysmeta headers. Wrap up the get_response call to hook in the acl handling method. Create a Swift request based on this request's environment. Bases: BaseException Client provided an X-Amz-Content-SHA256, but it doesn't match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property check_copy_source checks the copy source existence and, if copying an object to itself, checks for illegal request parameters, returning the source HEAD response get_container_info will return a result dict of get_container_info from the backend Swift. a dictionary of container info from swift.controllers.base.get_container_info NoSuchBucket when the container doesn't exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks are needed at the time of getting the swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request needs to just call get_response to get a pure swift response. Get and set the object acl property S3Timestamp from the Date header. If the X-Amz-Date header is specified, it takes priority over the Date header. :return : S3Timestamp instance Create a Swift request based on this request's environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured max_upload_part_num and, if given, parts_count. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return a timestamp string according to the auth type The difference from v2 is that v4 has to see X-Amz-Date even for the query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to Swift's normal HeaderKeyDict class, but its key name is normalized as S3 clients expect.
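To illustrate the kind of normalization meant here (a sketch under assumptions: the import path and the exact special cases are taken from the s3api source as I understand it, so treat them as hypothetical rather than guaranteed API):

```
from swift.common.middleware.s3api.s3response import HeaderKeyDict  # assumed location

headers = HeaderKeyDict()
headers['etag'] = '"d41d8cd98f00b204e9800998ecf8427e"'
headers['content-md5'] = '1B2M2Y8AsgTpgAmY7PhCfg=='
# Swift's plain HeaderKeyDict would title-case these as 'Etag' and
# 'Content-Md5'; the S3 flavor is expected to report them as 'ETag'
# and 'Content-MD5', matching what S3 clients look for.
```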
Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: InvalidArgument Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, Response Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swifts HeaderKeyDict. This also translates Swift specific headers to S3 headers. Create a new S3 response object based on the given Swift response. Bases: object Base class for swift3 responses. Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: BucketNotEmpty Bases: S3Exception Bases: S3Exception Bases: S3Exception Bases: Exception Bases: ElementBase Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text. Why we need this?: Original lxml.etree.Element supports only unicode for the text. It declines maintainability because we have to call a lot of encode/decode methods to apply account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such a redundant codes from swift.common.middleware.s3api middleware. utf-8 wrapper property of lxml.etree.Element.text Bases: dict If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a" }, { "data": "method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] Bases: Timestamp this format should be like YYYYMMDDThhmmssZ mktime creates a float instance in epoch time really like as time.mktime the difference from time.mktime is allowing to 2 formats string for the argument for the S3 testing usage. TODO: support timestamp_str a string of timestamp formatted as (a) RFC2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result) time_format a string of format to parse in (b) process a float instance in epoch time Returns the system metadata header for given resource type and name. Returns the system metadata prefix for given resource type. Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid. s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs. 
Current s3api (real S3) ACL model is as follows:
```
AccessControlPolicy:
    Owner:
    AccessControlList:
        Grant[n]:
            (Grantee, Permission)
```
Each bucket or object has its own acl consisting of Owner and AccessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL_CONTROL to the owner. Each Grant includes a single pair of Grantee and Permission. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you want more detailed information about the S3 ACL model, please see the official documentation: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
Bases: object S3 ACL class. Ref: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html : The sample ACL includes an Owner element identifying the owner via the AWS account's canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves exactly the same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACLs. Bases: object Grant class which includes both Grantee and Permission. Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantee. Methods: init: create a Grantee instance; elem: create an ElementTree from itself. Static Methods: from_header: convert a grantee string in the HTTP header to a Grantee instance; from_elem: convert an ElementTree to a Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to a Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enable this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift metadata. Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic Acl handling. (i.e. it will check acl from ACL_MAP by using HEAD) Make a handler with the name of the controller (e.g.
BucketAclHandler is for BucketController). It consists of methods for the actual S3 methods on controllers as follows. Example:
```
class BucketAclHandler(BaseAclHandler):
    def PUT(self):
        # put acl handling algorithms here for PUT bucket
        ...
```
Note If the method does NOT need to call get_response again outside of the acl checking, the method has to return the response it needs at the end of the method.
Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler Multi-upload operations require acl checking just once for the BASE container, so MultiUploadAclHandler extends BaseAclHandler to check the acl only when the verb is defined. We should define the verb as the first step of the request to backend Swift for an incoming request. The BASE container name is always without the MULTIUPLOAD_SUFFIX. Any check timing is OK, but we should check it as soon as possible.

| Controller | Verb   | CheckResource | Permission |
|:-----------|:-------|:--------------|:-----------|
| Part       | PUT    | Container     | WRITE      |
| Uploads    | GET    | Container     | READ       |
| Uploads    | POST   | Container     | WRITE      |
| Upload     | GET    | Container     | READ       |
| Upload     | DELETE | Container     | WRITE      |
| Upload     | POST   | Container     | WRITE      |

Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header is currently used only for normal-acl (not implemented) on s3acl. TODO: add translation to swift acls, such as x-container-read, to s3acl. Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isn't a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container's existence. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket requests. Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log.
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers. Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl.
Implementation of S3 Multipart Upload. This module implements the S3 Multipart Upload APIs with the Swift SLO feature. The following explains how s3api uses swift containers and objects to store S3 upload information:
[bucket]+segments: a container to store upload information. [bucket] is the original bucket where the multipart upload is initiated.
[bucket]+segments/[upload_id]: an object for the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not yet completed or aborted.
[bucket]+segments/[upload_id]/[part_number]: the last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[upload_id]/[part_number].
Example listing result in the [bucket]+segments container:
```
[bucket]+segments/[upload_id1]    # upload id object for upload_id1
[bucket]+segments/[upload_id1]/1  # part object for upload_id1
[bucket]+segments/[upload_id1]/2  # part object for upload_id1
[bucket]+segments/[upload_id1]/3  # part object for upload_id1
[bucket]+segments/[upload_id2]    # upload id object for upload_id2
[bucket]+segments/[upload_id2]/1  # part object for upload_id2
[bucket]+segments/[upload_id2]/2  # part object for upload_id2
.
.
```
Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit.
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned.
Middleware that will perform many operations on a single request. Expand tar files into a Swift account. Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url:
```
/v1/AUTH_Account/$UPLOAD_PATH?extract-archive=tar.gz
```
UPLOAD_PATH is where the files will be expanded to. UPLOAD_PATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows:
```
/v1/AUTH_Account/$UPLOAD_PATH/$FILE_PATH
```
Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, Content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header:
```
X-Detect-Content-Type: true
```
For example:
```
curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/?extract-archive=tar -T backup.tar -H "Content-Type: application/x-tar" -H "X-Auth-Token: xxx" -H "X-Detect-Content-Type: true"
```
The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example:
```
setfattr -n user.mime_type -v "application/python-setup" setup.py
setfattr -n user.meta.lunch -v "burger and fries" setup.py
setfattr -n user.meta.dinner -v "baked ziti" setup.py
setfattr -n user.stuff -v "whee" setup.py
```
Will get translated to headers:
```
Content-Type: application/python-setup
X-Object-Meta-Lunch: burger and fries
X-Object-Meta-Dinner: baked ziti
```
The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later.
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.user_attribute as pax header SCHILY.xattr.user.user_attribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.user_attribute.
The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be made to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the request. The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows:
```
{"Response Status": "201 Created", "Response Body": "", "Errors": [], "Number Files Created": 10}
```
If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed. There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequest's proxy log will have a swift.source set to EA; the log's content length will reflect the unzipped size of the file. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz).
Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form:
```
/container_name/obj_name
```
or for a container (which must be empty at time of delete):
```
/container_name
```
The response is similar to extract archive in that every response will be a 200 OK and you must parse the response body for actual results. An example response is:
```
{"Number Not Found": 0, "Response Status": "200 OK", "Response Body": "", "Errors": [], "Number Deleted": 6}
```
If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the request's Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml.
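For reference, a bulk delete request matching the description above might look like the following (the host, token, and names are illustrative):
```
curl -X POST "http://127.0.0.1:8080/v1/AUTH_test?bulk-delete" \
     -H "X-Auth-Token: <token>" \
     -H "Content-Type: text/plain" \
     -H "Accept: application/json" \
     --data-binary $'/container/obj1\n/container/obj2\n/old_container'
```
The JSON body of the 200 OK response then reports the per-item results as described above.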
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequest's proxy log will have a swift.source set to BD; the log's content length will be 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted).
Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. data_format: resulting format; data_dict: generated data about results; error_list: list of quoted filenames that failed; root_tag: the tag name to use for root elements when returning XML, e.g. extract or delete. Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is raised. inner_iter iterable of bytestrings nbytes number of bytes expected
CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain's CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environment's Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result)
The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and it's unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set:

| Metadata                     | Use                                      |
|:-----------------------------|:-----------------------------------------|
| X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. |
| X-Container-Meta-Quota-Count | Maximum object count of the container.   |

The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware.
For example:
```
[pipeline:main]
pipeline = catch_errors cache tempauth container_quotas proxy-server

[filter:container_quotas]
use = egg:swift#container_quotas
```
Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync.
Bases: object Cross domain middleware used to respond to requests for cross domain policy information. If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipses (...) indicate other middleware you may have chosen to use:
```
[pipeline:main]
pipeline = ... crossdomain ... authtoken ... proxy-server
```
And add a filter section, such as:
```
[filter:crossdomain]
use = egg:swift#crossdomain
cross_domain_policy = <allow-access-from domain="*.example.com" />
    <allow-access-from domain="www.example.com" secure="false" />
```
For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value. The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified:
```
cross_domain_policy = <allow-access-from domain="*" secure="false" />
```
Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942
Returns a 200 response with cross domain policy information
Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e. expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response:
```
{"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}}
```
This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation. There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so:
```
swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g
```
Domain Remap Middleware Middleware that translates container and account parts of a domain to path parameters that the proxy server understands. Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com. If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path.
For example, with the default configuration:
```
container.AUTH-account.example.com/object
container.AUTH-account.example.com/v1/object
```
would both be translated to:
```
container.AUTH-account.example.com/v1/AUTH_account/container/object
```
and:
```
AUTH-account.example.com/container/object
AUTH-account.example.com/v1/container/object
```
would both be translated to:
```
AUTH-account.example.com/v1/AUTH_account/container/object
```
Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a default_reseller_prefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The default_reseller_prefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration:
```
auth-account.example.com/container/object
AUTH-account.example.com/container/object
auth_account.example.com/container/object
AUTH_account.example.com/container/object
```
would all be translated to:
```
<unchanged>.example.com/v1/AUTH_account/container/object
```
When no match is found in reseller_prefixes, the default_reseller_prefix config option is used. When no default_reseller_prefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with default_reseller_prefix = AUTH:
```
account.example.com/container/object
```
would be translated to:
```
account.example.com/v1/AUTH_account/container/object
```
Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects for further details.
Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input.
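For orientation, a minimal proxy configuration enabling the encryption and keymaster middleware pair might look like the following sketch (the pipeline fragment and the root secret value are placeholders; see the keymaster documentation later in this section for the requirements on encryption_root_secret):
```
[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# must be at least 256 bits of high-entropy data, base64-encoded
encryption_root_secret = <base64-encoded 32+ byte random value>

[filter:encryption]
use = egg:swift#encryption
# disable_encryption = False
```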
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, crypto_meta) where crypto_meta is a dict of form returned by get_crypto_meta() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by get_crypto_meta() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form:
```
<base64-encoded ciphertext>;swift_meta=<crypto meta>
```
value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. header_name name of header that may have crypto_meta check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses the json body listing and decrypts encrypted entries. Updates the Content-Length header with the new body length and returns a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values.
HTTPInternalServerError if any error occurs while decrypting headers. Decrypts a multipart mime doc response body. resp application response boundary multipart boundary string body_key decryption key for the response body crypto_meta crypto_meta for the response body generator for decrypted response body Decrypts a response body. resp application response body_key decryption key for the response body crypto_meta crypto_meta for the response body offset offset into object content at which response body starts generator for decrypted response body
This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted. It must be placed at the beginning of the pipeline, right after cache:
```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```
Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs. Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s.
FormPost Middleware Translates a browser form post into a regular Swift object PUT. The format of the form is:
```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```
Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:
```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```
If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:
```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```
The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload. The <swift-url> is the URL of the Swift destination, such as:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```
The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like:
```
https://swift-cluster.example.com/v1/AUTH_account/container/
```
Optionally, you can include an object prefix to better separate different users' uploads, such as:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```
Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded). The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes. The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted; after that time the form is invalidated. The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:
```
import hmac
from hashlib import sha512
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count, expires)
signature = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                     sha512).hexdigest()
```
The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys. Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes. The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server-side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored).
Bases: object FormPost Middleware See above for a full description. The proxy logs created for any subrequests made will have swift.source set to FP. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. The maximum size of any attribute's value. Any additional data will be truncated. The size of data to read from the form at any given time. Returns the WSGI filter for use with paste.deploy.
The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client. The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response.
Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body.
Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetch_crypto_keys hook in the request environment. Subclasses should define log_route, keymaster_opts, and keymaster_conf_section attributes, and implement the _get_root_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being encrypted. secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function:
```
<path_key> = HMAC_SHA256(<root_secret>, <path>)
```
Setup container and object keys based on the request path. Keys are derived from the request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and all_ids. The all_ids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys.
Bases: object Swift middleware to Keystone authorization system. In Swift's proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone.
proxy-server.conf-sample The authtoken middleware is shipped with keystonemiddleware - it does not have any dependencies other than itself, so you can either install it by copying the file directly into your python path or by installing keystonemiddleware. If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware. In proxy-server.conf you will need to have the account auto creation setting set to true:
```
[app:proxy-server]
account_autocreate = true
```
And add a swift authorization filter section, such as:
```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```
The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting, which by default includes the admin and the swiftoperator roles. The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234. If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:
```
reseller_prefix = NEWAUTH
```
Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path. It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:
```
reseller_prefix = AUTH, SERVICE
```
This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:
```
operator_roles
service_roles
```
For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:
```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, some_other_role
```
X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:
```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```
The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token. Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:
```
default_domain_id = default
```
The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:
```
allow_names_in_acls = false
```
To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with the reseller admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account. By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:
```
allow_overrides = false
```
app The next WSGI app in the pipeline conf The dict of configuration values Authorize an anonymous request. None if authorization is granted, an error page otherwise. Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Returns a WSGI filter app for use with paste.deploy.
List endpoints for an object, account or container. This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop. Using the original API, answers requests of the form:
```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```
with a JSON-encoded list of endpoints of the form:
```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```
correspondingly, e.g.:
```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```
Using the v2 API, answers requests of the form:
```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```
with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:
```
{ "endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1",
                "http://10.1.1.1:6230/sda3/2/a/c3/o1",
                "http://10.1.1.1:6240/sda4/2/a/c3/o1"],
  "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```
In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
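As an illustration, a service running inside the cluster could query the v2 API like so (the proxy address and names are hypothetical):
```
curl http://127.0.0.1:8080/endpoints/v2/AUTH_test/c3/o1
```
and receive a JSON dictionary of endpoints and headers in the form shown above.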
The /endpoints/ path is customizable (list_endpoints_path configuration parameter). Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language. This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date. Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware). Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift). app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Get the ring object to use to handle a request based on its policy. policy index as defined in swift.conf appropriate ring object
Bases: object Caching middleware that manages caching in swift.
Created on February 27, 2012 A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length. Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:
```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```
There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH). The filter returns HTTPBadRequest if the path is invalid. @author: eamonn-otoole
Object versioning in Swift has 3 different modes. There are two legacy modes that have similar APIs with a slight difference in behavior and this middleware introduces a new mode with a completely redesigned API and implementation. In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object. This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either the X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response. After the introduction of this feature containers in a Swift cluster will be in one of 3 possible states: 1. Object versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled. Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response.
Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id.
A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with query parameter:
```
?versions
```
To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest.
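For example, enabling versioning and listing versions might look like this (the host, token, and names are illustrative):
```
# Enable versioning on a container
curl -X POST -H "X-Auth-Token: <token>" \
     -H "X-Versions-Enabled: true" \
     http://127.0.0.1:8080/v1/AUTH_test/container

# List all object versions in the container
curl -H "X-Auth-Token: <token>" \
     "http://127.0.0.1:8080/v1/AUTH_test/container?versions"
```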
To enable this new mode in a Swift cluster, the versioned_writes and symlink middlewares must be added to the proxy pipeline; you must also set the option allow_object_versioning to True.

Bases: ObjectVersioningContext

Bases: object

Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available.

Bases: ObjectVersioningContext

Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted.

Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the objects' bytes and etag to use the target's instead of using the symlink info.

Bases: ObjectVersioningContext

Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request.

req original request.
versions_cont container where previous versions of the object are stored.
api_version api version.
account_name account name.
object_name name of object of original request

Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object.

req original request.
versions_cont container where previous versions of the object are stored.
account account name.

Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container.

req original request.
versions_cont container where previous versions of the object are stored.
api_version api version.
account_name account name.
object_name name of object of original request

Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id.

Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended.

PUT and POST requests are not allowed as that would overwrite the contents of the versioned object.

req The original request
versions_cont container holding versions of the requested obj
api_version should be v1 unless swift bumps api version
account account name string
container container name string
object object name string
is_enabled is versioning currently enabled
version version of the object to act on

Bases: WSGIContext
The logging format implemented below is as follows: ``` clientip remoteaddr end_time.datetime method path protocol statusint referer useragent authtoken bytesrecvd bytes_sent clientetag transactionid headers requesttime source loginfo starttime endtime policy_index ``` These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remoteaddr is the contents of the REMOTEADDR environment variable, while client_ip is swifts best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middlewares start_response function, unless the WSGI environment has an item with key swift.proxyloggingstatus, in which case the value of that item is used. Other middlewares may set swift.proxyloggingstatus to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) loginfo (swift.loginfo in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.loginfo', []).append(yourinfo) so as to not disturb others log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the logmsgtemplate option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy servers pipeline when there is middleware installed that can return custom responses that dont follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the clients original request using the 2nd requests body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxyaccesslog_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swifts proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source. Bases: object Middleware that logs Swift proxy requests in the swift log format. Log a request. 
Log a request.

req swob.Request object for the request
status_int integer code for the response status
bytes_received bytes successfully read from the request body
bytes_sent bytes yielded to the WSGI server
start_time timestamp request started
end_time timestamp request completed
resp_headers dict of the response headers
ttfb time to first byte
wire_status_int the on the wire status int

Bases: Exception

Bases: object

Rate limiting middleware. Rate limits requests on both an Account and Container level. Limits are configurable.

Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order.

req swob request
account_name account name from path
container_name container name from path
obj_name object name from path
global_ratelimit this account has an account wide ratelimit on all writes combined

Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None.

account_name account name from path
container_name container name from path
obj_name object name from path

paste.deploy app factory for creating WSGI proxy apps.

Returns number of requests allowed per second for given size.

Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info.

conf conf dict to parse
name_prefix prefix of config parms to look for
info set to return extra stuff for /info registration

Bases: object

Middleware that makes an entire cluster or individual accounts read only.

Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only.

paste.deploy app factory for creating WSGI proxy apps.

Bases: object

Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file:

```
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
```

get # of async pendings
get auditor info
get devices
get disk utilization statistics
get # of drive audit errors
get expirer info
get info from /proc/loadavg
get info from /proc/meminfo
get ALL mounted fs from /proc/mounts
get obj/container/account quarantine counts
get reconstruction info
get relinker info, if any
get replication info
get all ring md5sums
get sharding info
get info from /proc/net/sockstat and sockstat6

Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size.

get md5 of swift.conf
get current time
list unmounted (failed?) devices
get updater info
get swift version

Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation.

The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy.

All metadata of the source object is preserved during object copy. One can also provide additional metadata during the PUT/COPY request. This will over-write any existing conflicting keys.
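For instance, a copy that also sets new metadata could look like this sketch (placeholder storage URL and token; requests is a third-party HTTP library):

```python
import requests

STORAGE_URL = 'http://swift.example.com/v1/AUTH_test'  # placeholder
TOKEN = '<token>'  # placeholder

# Copy container2/source_obj to container1/destination_obj while adding a
# metadata header; conflicting keys from the source are overwritten.
resp = requests.put(
    '%s/container1/destination_obj' % STORAGE_URL,
    headers={
        'X-Auth-Token': TOKEN,
        'X-Copy-From': '/container2/source_obj',
        'Content-Length': '0',            # copy requests must have no body
        'X-Object-Meta-Reviewed': 'yes',  # extra metadata applied on copy
    })
resp.raise_for_status()
```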
Server side copy can also be used to change the content-type of an existing object.

The destination container must exist before requesting copy of the object.

When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request.

The request to copy an object should have no body (i.e. the content-length of the request must be zero).

There are two ways in which an object can be copied:

Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:

```
curl -i -X PUT http://<storage_url>/container1/destination_obj
 -H 'X-Auth-Token: <token>'
 -H 'X-Copy-From: /container2/source_obj'
 -H 'Content-Length: 0'
```

Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:

```
curl -i -X COPY http://<storage_url>/container2/source_obj
 -H 'X-Auth-Token: <token>'
 -H 'Destination: /container1/destination_obj'
 -H 'Content-Length: 0'
```

Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object).

Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from the container in the source account and permission to write to the container in the destination account).

Similar to the examples mentioned above, there are two ways to copy objects across accounts:

Like the example above, send a PUT request to copy the object but with an additional header named X-Copy-From-Account specifying the source account. Example:

```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj
 -H 'X-Auth-Token: <token>'
 -H 'X-Copy-From: /container/source_obj'
 -H 'X-Copy-From-Account: AUTH_test2'
 -H 'Content-Length: 0'
```

Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of the destination account. Example:

```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj
 -H 'X-Auth-Token: <token>'
 -H 'Destination: /container/destination_obj'
 -H 'Destination-Account: AUTH_test1'
 -H 'Content-Length: 0'
```

The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:

```
?multipart-manifest=get
```

If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB.

Bases: WSGIContext

Please see the SLO docs for Static Large Objects further details.

This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request.

The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:

```
[DEFAULT]
...

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server

...

[filter:staticweb]
use = egg:swift#staticweb
```

Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:

```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```

If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/.

If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found.

For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings.

If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled.

Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix.

Note: tempurl_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL.

By default, the listings will be rendered with a label of "Listing of /v1/account/container/path". This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to "example.com", a label of "Listing of example.com/path" will be used instead.

The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure.

Example usage of this middleware via swift:

Make the container publicly readable:

```
swift post -r '.r:*' container
```

You should be able to get objects directly, but no index.html resolution or listings.

Set an index file directive:

```
swift post -m 'web-index:index.html' container
```

You should be able to hit paths that have an index.html without needing to type the index.html part.

Turn on listings:

```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```

Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet: ``` swift post -m 'web-listings-css:listings.css' container ``` Set an error file: ``` swift post -m 'web-error:error.html' container ``` Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object: ``` swift post -m 'web-directory-type:text/directory' container ``` Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series" }, { "data": "Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include a X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object theyre pointing at on a GET. In contrast to a dynamic symlink the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlinks X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e. 
a symlink created with a X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without a X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with a X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header).

A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object.

A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency.

A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends a X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing.

A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted.

A COPY request, or a PUT request with a X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself.

An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings.

Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks.

If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object.

A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function.
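A sketch of pulling the symlinks out of such a JSON listing (placeholder storage URL and token; requests is a third-party HTTP library):

```python
import requests

STORAGE_URL = 'http://swift.example.com/v1/AUTH_test'  # placeholder
TOKEN = '<token>'  # placeholder

listing = requests.get(
    '%s/container' % STORAGE_URL,
    headers={'X-Auth-Token': TOKEN},
    params={'format': 'json'}).json()

# Symlink entries carry a symlink_path key; plain objects do not.
for obj in listing:
    if 'symlink_path' in obj:
        print('%s -> %s' % (obj['name'], obj['symlink_path']))
```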
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info.

If a X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata.

Errors:

PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error.
POST with the header X-Symlink-Target will produce a 400 BadRequest error.
GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error.
PUT/GET/HEAD on a symlink that includes a X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error.
POSTs will produce a 307 Temporary Redirect error.

Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used.

Note: Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object.

If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required:

Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server.

Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:

```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```

Note: These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster.

Bases: WSGIContext

Handle container requests.

req a Request
start_response start_response function
Response Iterator after start_response called.

Bases: object

Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects.

Bases: WSGIContext

Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request
Response Iterator

Handle get/head request when client sent parameter ?symlink=get

req HTTP GET or HEAD object request with param ?symlink=get
Response Iterator

Handle object requests.

req a Request
start_response start_response function
Response Iterator after start_response has been called

Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not.

req HTTP POST object request
HTTPTemporaryRedirect if POSTing to a symlink.
Response Iterator

Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace.

req HTTP PUT object request
Response Iterator

Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers.

headers request headers dict. Note that the headers dict will be updated directly.

Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers.

headers request headers dict. Note that the headers dict will be updated directly.

Test authentication and authorization system.

Add to your pipeline in proxy-server.conf, such as:

```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```

Set account auto creation to true in proxy-server.conf:

```
[app:proxy-server]
account_autocreate = true
```

And add a tempauth filter section, such as:

```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```

See the proxy-server.conf-sample for more information.

All accounts/users are listed in the filter section. The format is:

```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```

If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:

```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```

There are three special groups:

.reseller_admin can do anything to any account for this auth
.reseller_reader can GET/HEAD anything in any account for this auth
.admin can do anything within the account

If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin.

The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:

```
$HOST/v1/<reseller_prefix>_<account>
```

Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override.

The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible.

We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account.

The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix.

If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed.

The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:

```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```

The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name.

Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs.

If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users.

Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata.

Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata.

Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs).

To generate headers for setting an account ACL:

```
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
```

To generate a curl command line from the above:

```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
headers = {'X-Account-Access-Control': format_acl(version=2, acl_dict=acl_data)}
header_str = ' '.join(["-H '%s: %s'" % (k, v) for k, v in headers.items()])
print('curl -D- -X POST -H "x-auth-token: $token" %s '
      '$storage_url' % header_str)
'
```

Bases: object

app The next WSGI app in the pipeline
conf The dict of configuration values from the Paste config file

Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth.

X-Account-Access-Control
swift.common.middleware.acl.parse_acl()
swift.common.middleware.acl.format_acl()

Returns None if the request is authorized to continue or a standard WSGI response callable if not.

Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Return a user-readable string indicating the errors in the input ACL, or None if there are no errors.

Get groups for the given token.

env The current WSGI environment dictionary.
token Token to validate and return a group string for.
None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user.

WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down.

env WSGI environment dictionary
start_response WSGI callable

Handles the various request for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:

```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```

On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use.

req The swob.Request to process.
swob.Response, 2xx on success with data set as explained above.

Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response).

req swob.Request object

Returns a WSGI filter app for use with paste.deploy.

TempURL Middleware

Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs, which contain signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects.

Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from.

To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account.

The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported.

For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Be certain to use the full path, from the /v1/ onward.

Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```

For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:

```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```

Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```

You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```

If a prefix-based signature with the prefix pre is desired, set path to:

```
path = 'prefix:/v1/AUTH_account/container/pre'
```

The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

Another valid URL:

```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```

If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```

Similarly, to lock down the ip to a range of 1.2.3.X, so starting from the ip 1.2.3.0 to 1.2.3.255:

```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```

Then the following url would be valid:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```

Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed.

Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift.

TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present.

With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved. The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```

In some cases, the client might not be able to present the content of the object, but you still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```

This middleware understands the following configuration settings:

A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired

A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None

A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-*

A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-*

A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE

A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512

Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match.

Default headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals.

Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match.

Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals.
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTPXMATCHESREMOVEPREFIXBUTOKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTPXPRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTPXSENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that is now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simulataneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archivecontainer>/<length><objectname>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata" }, { "data": "A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object. 
New versions are only created when the content of the object changes.

A DELETE to a versioned object will be handled in one of two ways, as described above.

To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it.

This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature.

In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request.

Note: You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location.

If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware.

Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss.

First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-Versions-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from versions container and back in container container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location" }, { "data": "In this example, the name of that container is versions: ``` curl -i -XPUT -H \"X-Auth-Token: <token>\" -H \"X-History-Location: versions\" http://<storage_url>/container curl -i -XPUT -H \"X-Auth-Token: <token>\" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> -H \"Destination: container/myobject\" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H \"X-Auth-Token: <token>\" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H \"X-Auth-Token: <token>\" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allowversionedwrites to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H \"X-Auth-Token: <token>\" -H \"X-Remove-Versions-Location: x\" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name. 
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request.

req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Copy current version of object to versions_container before proceeding with original request.

req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request

Profiling middleware for Swift servers.

The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates CPU timing statistics for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed at the URLs below.

Index page for browsing profile data:

```
http://SERVER_IP:PORT/__profile__
```

List all profiles to return profile ids in json format:

```
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
```

Retrieve specific profile data in different formats:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
```

Retrieve metrics from a specific function in json format:

```
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json

NFL is defined by concatenation of file name, function name and the first line number.
e.g.:
    account.py:50(GETorHEAD)
or with full path:
    /opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)

A list of URL examples:

http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
```

The profiling middleware can be configured in the paste file for WSGI servers such as the proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory.

The profiling data is provided in four formats: binary (by default), json, csv, and an ODF spreadsheet, which requires installing the odfpy library:

```
sudo pip install odfpy
```

There's also a simple visualization capability, which is enabled by using the matplotlib toolkit; it is also required to be installed if you want to use this feature.
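As a concrete illustration, here is a paste-deploy sketch for enabling the profiler on a proxy server. The xprofile filter name and the commented option names are taken from the sample configuration files and should be treated as assumptions for your release; values shown are illustrative:

```
[pipeline:main]
pipeline = catch_errors xprofile proxy-server

[filter:xprofile]
use = egg:swift#xprofile
# profile_module = eventlet.green.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
# dump_interval = 5.0
# path = /__profile__
# flush_at_shutdown = false
```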
{ "category": "Runtime", "file_name": "middleware.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Header-Name: {arbitrary:json,encoded:string} JSON will be forced ASCII (containing six-char uNNNN sequences rather than UTF-8; UTF-8 is valid JSON but clients vary in their support for UTF-8 headers), and without extraneous whitespace. Advantages over V1: forward compatibility (new keys dont cause parsing exceptions); Unicode support; no reserved words (you can have a user named .rlistings if you" }, { "data": "acl_dict dict of arbitrary data to put in the ACL; see specific auth systems such as tempauth for supported values a JSON string which encodes the ACL Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parseaclv1() or parseaclv2()) the return value of parseaclv1() or parseaclv2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {groups: [], referrers: []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, Im looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in init. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader(Accept, text/html) Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skipacceptencoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed. 
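Tying the ACL helpers documented above together, here is a minimal sketch. It assumes a checkout where the swift package is importable; clean_acl, parse_acl, format_acl and referrer_allowed live in swift.common.middleware.acl:

```python
# A minimal round-trip through the V1 ACL helpers; assumes the swift
# package is importable (e.g. a development checkout).
from swift.common.middleware.acl import (
    clean_acl, format_acl, parse_acl, referrer_allowed)

# Normalize a messy V1 header value (.ref is shortened to .r, spaces go).
acl = clean_acl('X-Container-Read', '.ref:*.example.com , .rlistings, bob')
print(acl)  # .r:*.example.com,.rlistings,bob

# Split into referrer designations and groups, then evaluate a Referer.
referrers, groups = parse_acl(acl)
print(referrer_allowed('http://www.example.com/page.html', referrers))

# Round-trip back into a header value.
print(format_acl(version=1, groups=groups, referrers=referrers,
                 header_name='X-Container-Read'))
```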
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be" }, { "data": "req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us for accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFTCONFFILE and reset module level global constraint attrs, populating OVERRIDECONSTRAINTS AND EFFECTIVECONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the files mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the users container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digestalgorithm, hexencoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. 
ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated itll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULTALLOWEDDIGESTS. logger optional logger; if provided, use it issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared" }, { "data": "digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/on127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response endmarker endmarker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g, {states:updating} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. 
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
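As a concrete illustration of the direct_client helpers documented above, here is a hedged sketch that HEADs one object on each of its primary nodes, bypassing the proxy entirely. The ring location /etc/swift and the AUTH_test account, container, and object names are assumptions for your environment:

```python
# HEAD an object directly on each primary node; assumes rings under
# /etc/swift and that the named account/container/object exist.
from swift.common import direct_client
from swift.common.ring import Ring

ring = Ring('/etc/swift', ring_name='object')
part, nodes = ring.get_nodes('AUTH_test', 'container', 'myobject')

for node in nodes:
    try:
        headers = direct_client.direct_head_object(
            node, part, 'AUTH_test', 'container', 'myobject')
        print(node['device'], headers['X-Timestamp'])
    except direct_client.ClientException as err:
        print(node['device'], 'failed:', err)
```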
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
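A minimal InternalClient sketch tying the methods above together. The conf path is an assumption; any proxy-server-style config the internal client can load will work:

```python
# Create a container, upload an object, and list it back via the
# internal client; the conf path below is an assumption.
from io import BytesIO
from swift.common.internal_client import InternalClient

swift = InternalClient('/etc/swift/internal-client.conf',
                       'example-agent', request_tries=3)

swift.create_container('AUTH_test', 'backups')
swift.upload_object(BytesIO(b'payload'), 'AUTH_test', 'backups', 'obj1')

for item in swift.iter_objects('AUTH_test', 'backups'):
    print(item['name'], item['bytes'])
```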
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group : param pid: process id : param sig: signal to send Send signal to process and check process name : param pid: process id : param sig: signal to send : param name: name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the servers binaries are installed or" }, { "data": "server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that arent responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesnt use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but its hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldnt work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overriden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract to the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache. 
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by" }, { "data": "If the key cant be found, its added as delta or 0 if delta < 0. If passed a negative number, will use memcacheds decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. 
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include tempurlsignature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to getswiftinfo. in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large" }, { "data": "req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. maxgettime maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) responsebodylength optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .appiterrange(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnect. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segments etag doesnt match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from fromr to tor. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key. 
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
Removes the object transient system metadata prefix from the start of a header key. key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object names. HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasn't stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but it's also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Request's Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers['Content-Range'] sets and gets the value of headers.environ['HTTP_CONTENT_RANGE'] Bases: object Wraps a Request's If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Request's Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges.
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered syntactically invalid, and thus a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid and ValueError will be thrown so that the header will be ignored: value does not start with bytes= range value start is greater than the end, e.g. bytes=5-3 range does not have start or end, e.g. bytes=- range does not have a hyphen, e.g. bytes=45 range value is non numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by the underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method init made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored, which will eventually result in a 200 response. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, and 416 will be returned; if a returned list has at least one element, the list indicates that there is at least one valid range and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content
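Assuming swift.common.swob is importable, exercising the behavior described above looks roughly like this:

```
from swift.common.swob import Range

r = Range('bytes=0-4,10-')
r.ranges                    # [(0, 4), (10, None)]
# ranges_for_length() adds 1 to each inclusive last-byte so the
# resulting pairs can be used directly for slicing
r.ranges_for_length(20)     # [(0, 5), (10, 20)]
Range('bytes=50-').ranges_for_length(20)   # [] -> a 416 response
Range('bytes=5-3')          # raises ValueError: header is ignored
```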
Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as an Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with a property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this request's environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the request's X-Timestamp header, as a Timestamp Calls the application with this request's environment. Returns a Response object that wraps up the application's result. application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment
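For example, assuming swob is importable, request construction and path validation commonly look like:

```
from swift.common.swob import Request

req = Request.blank('/v1/AUTH_test/cont/obj/with/slashes')
# maxsegs=4 with rest_with_last=True keeps any remaining slashes
# inside the final (object) segment
version, account, container, obj = req.split_path(4, 4, True)
# -> ('v1', 'AUTH_test', 'cont', 'obj/with/slashes')
```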
Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location. Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of an If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need to support conditional requests. It's most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or app_iter to reset the content_length properties on the request. It is ok to not call this method; the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators' close methods, if any. Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of _get_next_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gc'd
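The closing-wrapper pattern described above is small enough to sketch in full (a simplified stand-in for illustration, not Swift's class):

```
class ClosingWrapper(object):
    def __init__(self, iterable, other_closeables=None):
        self.iterator = iter(iterable)
        self.closeables = [iterable] + list(other_closeables or [])

    def __iter__(self):
        return self

    def __next__(self):
        return next(self.iterator)

    def close(self):
        # attempt to close everything we were given; objects without
        # a close() method are simply skipped
        for closeable in self.closeables:
            close_method = getattr(closeable, 'close', None)
            if close_method is not None:
                close_method()
```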
Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over the wrapped iterator, providing an opportunity to switch greenthreads. This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve. The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used members. Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <account_name>/<container_name>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces.
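The bound semantics (exclusive lower, inclusive upper) make the includes/overlaps checks above simple comparisons; a toy sketch with plain string bounds (Swift's real class also handles the MIN/MAX singleton bounds):

```
class ToyNamespace(object):
    def __init__(self, lower, upper):
        # lower is excluded from the namespace, upper is included
        self.lower, self.upper = lower, upper

    def overlaps(self, other):
        # disjoint only when one range ends at or before the other starts
        return self.lower < other.upper and other.lower < self.upper

    def includes(self, other):
        return self.lower <= other.lower and other.upper <= self.upper

ToyNamespace('a', 'm').overlaps(ToyNamespace('k', 'z'))   # True
ToyNamespace('a', 'm').includes(ToyNamespace('c', 'h'))   # True
ToyNamespace('a', 'm').overlaps(ToyNamespace('m', 'z'))   # False
```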
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/ <root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name.
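Because the root container name may itself contain hyphens, parsing the form above has to split from the right; a simplified sketch (the real parser also validates each component):

```
def split_shard_name(name):
    account, container = name.split('/', 1)
    # the last three hyphen-separated fields are always the parent
    # container hash, the timestamp and the index
    root_container, parent_hash, timestamp, index = \
        container.rsplit('-', 3)
    return account, root_container, parent_hash, timestamp, index

split_shard_name('.shards_AUTH_test/c-1f1f1f-1234567890.12345-0')
# -> ('.shards_AUTH_test', 'c', '1f1f1f', '1234567890.12345', '0')
```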
Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp. This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard range's ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard range's parent shard range, the parent's parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard range's root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard range's root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class init() method and requires all of the init() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if object_count or bytes_used cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard ranges. This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard.
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name> Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
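The marker/end_marker filtering described above reduces to two bound comparisons; a toy sketch over objects with lower/upper attributes and plain string bounds (ignoring the includes case and the MIN/MAX sentinels):

```
def filter_shard_ranges(shard_ranges, marker='', end_marker=''):
    kept = []
    for sr in shard_ranges:
        if marker and sr.upper <= marker:
            continue        # entirely at or below the marker
        if end_marker and sr.lower >= end_marker:
            continue        # entirely at or beyond the end_marker
        kept.append(sr)
    return kept
```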
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list's contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ``` >>> si = Spliterator(["abcde", "fg", "hijkl"]) >>> ''.join(si.take(4)) "abcd" >>> ''.join(si.take(3)) "efg" >>> ''.join(si.take(1)) "h" >>> ''.join(si.take(3)) "ijk" >>> ''.join(si.take(3)) "l" # shorter than requested; this can happen with the last iterator ``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet timeouts, it reduces the number of context switches in eventlet by avoiding the scheduling of actions (throwing an Exception) and their subsequent unscheduling if the timeouts are cancelled. A single watchdog greenlet sleeps until the next scheduled timeout expiration; scheduling a timeout that expires sooner wakes up the watchdog greenlet to calculate a new sleep period. Stop the watchdog greenthread. Start the watchdog greenthread.
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. e.g. they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. e.g. saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize number of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s)
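Typical usage of the location generator described above, assuming an object-server layout under /srv/node (a sketch; the filter and hook keyword arguments are omitted):

```
from swift.common.utils import audit_location_generator

# 'objects' is the datadir for storage policy 0; the '.data' suffix
# restricts the walk to object data files
locations = audit_location_generator(
    '/srv/node', 'objects', '.data', mount_check=False)
for path, device, partition in locations:
    print(path, device, partition)
```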
Appends an item to a comma-separated string. If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header Finally, there is one optional key that is used in the multipart/byteranges case: entity_length: length of the requested entity (not necessarily equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. boundary, not --boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content-type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it.
It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form: ``` <hash>[_<epoch>].db ``` where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function from patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the Python select module to test if poll() is available on the platform. Return partition number for given hex hash and partition power. hex_hash A hash string part_power partition power partition number
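The free-space check described above boils down to a statvfs calculation; a minimal equivalent for the byte-count case (not the percentage case):

```
import os

def has_free_space(fs_path, bytes_needed):
    st = os.statvfs(fs_path)
    # f_bavail is the number of blocks available to unprivileged
    # users; f_frsize is the fragment size in bytes
    return st.f_bavail * st.f_frsize >= bytes_needed

has_free_space('/srv/node/sdb1', 100 * 2 ** 20)  # need 100 MiB free
```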
devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but a X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ``` 'lock', None => 'lock' 'lock', 0 => 'lock' 'lock', 1 => 'lock-1' 'object', 2 => 'object-2' ``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty.
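Conceptually, the canonical hash described above is an MD5 over the joined path, salted with the cluster-wide hash path prefix and suffix from swift.conf; a simplified sketch (the real function also normalizes its arguments and rejects an object given without a container):

```
from hashlib import md5

HASH_PATH_PREFIX = b''           # from swift.conf on a real cluster
HASH_PATH_SUFFIX = b'changeme'   # from swift.conf on a real cluster

def toy_hash_path(account, container=None, obj=None):
    paths = [p for p in (account, container, obj) if p]
    key = b'/' + '/'.join(paths).encode('utf-8')
    return md5(HASH_PATH_PREFIX + key + HASH_PATH_SUFFIX).hexdigest()

toy_hash_path('AUTH_test', 'cont', 'obj')
# -> hex digest used to place the item in the ring
```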
Context manager that acquires a lock on a file. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in write mode. Python doesn't allow directories to be opened in write mode, so we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string to distinguish different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if the target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in the MemcacheRing class, such as memcached set, get, etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read()
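Usage of the directory lock described above is a plain context manager; for example, assuming swift.common.utils is importable:

```
from swift.common.utils import lock_path

with lock_path('/srv/node/sdb1/tmp', timeout=10):
    # a hidden lock file inside the directory is held for the
    # duration of the block; concurrent lockers block or time out
    pass  # ... do work that needs exclusive access ...
```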
Ensures the path is a directory or makes it if not. Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ('form-data', {'name': 'somefile', 'filename': 'test.html'}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')]) ``` content_type content_type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >>> parse_db_filename("ab2134.db") ('ab2134', None, '.db') >>> parse_db_filename("ab2134_1234567890.12345.db") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be deleted. Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators has been executed before continuing. iterable an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsync'd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories.
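The rate limiter described above follows a running-total pattern; usage looks roughly like this, assuming an eventlet-patched process:

```
from swift.common.utils import ratelimit_sleep

running_time = 0
for item in range(1000):
    # sleeps just enough to keep the loop at <= 100 items per second
    running_time = ratelimit_sleep(running_time, max_rate=100)
    # ... process item ...
```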
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to an object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different types of servers: If option replication_server is None then this decorator doesn't matter. If option replication_server is True then ONLY methods decorated with this decorator will be started. If option replication_server is False then methods decorated with this decorator will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are exhausted. its list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate a device's variables inside an rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dir_ext if present directories that end with dir_ext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that item's value will determine if the replication ip address and port are returned. If neither use_replication nor node_dict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks.
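The IPv6 bracketing described above can be sketched with the stdlib (a rough equivalent for illustration; Swift's implementation differs in detail):

```
import ipaddress

def bracket_rsync_ip(ip):
    try:
        version = ipaddress.ip_address(ip).version
    except ValueError:
        return ip              # not a literal address; leave unchanged
    return '[%s]' % ip if version == 6 else ip

bracket_rsync_ip('192.168.1.1')   # -> '192.168.1.1'
bracket_rsync_ip('fe80::1')       # -> '[fe80::1]'
```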
Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in Swift's wsgi server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensured it is synced to disk, then moved to its final location. obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a wsgi pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters.
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. servers_per_port (int) The number of workers to run per port. Yields all known listen sockets. Log a server's exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. Yields all known listen sockets. Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited. NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited.
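The tracking data described for these strategy classes (a map of pid -> socket, reaping dead workers, forking replacements) is the classic pre-fork model. A stripped-down sketch of that model follows, independent of Swift's actual strategy objects, which additionally handle ring/port changes and graceful reloads; serve_forever is a hypothetical stand-in for the per-worker wsgi loop:

```
import os
import socket


def run_prefork(bind_addr, num_workers, serve_forever):
    sock = socket.socket()
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(bind_addr)
    sock.listen(128)

    children = {}  # pid -> socket, like the strategies' tracking data
    while True:
        while len(children) < num_workers:
            pid = os.fork()
            if pid == 0:             # child: inherit the listen socket
                serve_forever(sock)
                os._exit(0)
            children[pid] = sock     # parent: register the new worker
        pid, _status = os.wait()     # reap a dead worker ...
        children.pop(pid, None)      # ... and loop to respawn it
```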
Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env['PATH_INFO']. url += quote(environ['PATH_INFO']) query_string The new query_string or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += '?' + env['QUERY_STRING'] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_env make_subrequest calls this make_env to help build the swob.Request. Fresh swob.Request object.
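A hedged usage sketch for make_subrequest from inside a middleware that already holds a WSGI env; the account/container path and the app argument are illustrative assumptions, not values from this documentation:

```
from swift.common.wsgi import make_subrequest


def head_container(env, app):
    # Build a quoted-path HEAD subrequest derived from the current env.
    sub_req = make_subrequest(
        env,
        method='HEAD',
        path='/v1/AUTH_test/some-container',  # hypothetical path
        agent='%(orig)s SubrequestExample',
        swift_source='EX')
    # Dispatch the subrequest through the given WSGI app.
    return sub_req.get_response(app)
```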
Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a servers_per_port setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allow_modify_pipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whose first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclass (e.g. StoragePolicy, ECStoragePolicy, etc) or use reload_storage_policies() to load POLICIES from swift.conf. The object_ring property is lazily loaded once the service's swift_dir is known via get_object_ring(), but it may be over-ridden via the object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib. node_index integer of node index integer of actual fragment index; if param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring.
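The EC quorum rule above reduces to simple arithmetic once the scheme's minimum parity is known. A sketch with min_parity supplied as an input rather than queried from PyECLib:

```
def ec_quorum_size(ec_ndata, min_parity=1):
    # For Reed-Solomon min_parity is 1, so quorum is ec_ndata + 1;
    # other schemes should take min_parity from PyECLib's
    # min_parity_fragments_needed().
    return ec_ndata + min_parity


# e.g. a 10+4 Reed-Solomon policy needs 11 successful fragment PUTs
assert ec_quorum_size(10) == 11
```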
Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced:

- If a policy with index 0 is not declared and no other policies are defined, Swift will create one
- The policy index must be a non-negative integer
- If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default
- Policy indexes must be unique
- Policy names are required
- Policy names are case insensitive
- Policy names must contain only letters, digits or a dash
- Policy names must be unique
- The policy name Policy-0 can only be used for the policy with index 0
- If any policies are defined, exactly one policy must be declared default
- Deprecated policies can not be declared the default

Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy.
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int); if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
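The encoding these helpers describe is straightforward: index 0 keeps the bare legacy name and any other index appends -<index>. The standalone sketch below mirrors that behaviour for illustration; the real helpers also validate the index against POLICIES and raise PolicyError:

```
def policy_string(base, policy_index):
    # Policy-0 keeps the legacy bare name, e.g. 'objects'
    if not policy_index:
        return base
    return '%s-%d' % (base, policy_index)


def split_policy_string(value):
    base, sep, index = value.rpartition('-')
    if sep and index.isdigit():
        return base, int(index)
    return value, 0


assert policy_string('objects', 0) == 'objects'
assert policy_string('objects', 2) == 'objects-2'
assert split_policy_string('objects-2') == ('objects', 2)
```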
{ "category": "Runtime", "file_name": "misc.html#module-swift.common.bufferedhttp.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: DatabaseAuditor Audit containers. alias of ContainerBroker Pluggable Back-ends for Container Server Bases: DatabaseBroker Encapsulates working with a container database. Note that this may involve multiple on-disk DB files if the container becomes sharded: dbfile is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded. db_files is a list of existing db files for the broker. This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING state. db_file is the path to whichever db is currently authoritative for the container. Depending on the containers state, this may not be the same as the dbfile argument given to init_(), unless forcedbfile is True in which case db_file is always equal to the dbfile argument given to init_(). pendingfile is always equal to db_file extended with .pending, i.e. <hash>.db.pending. Create a ContainerBroker instance. If the db doesnt exist, initialize the db file. device_path device path part partition number account account name string container container name string logger a logger instance epoch a timestamp to include in the db filename put_timestamp initial timestamp if broker needs to be initialized storagepolicyindex the storage policy index a tuple of (broker, initialized) where broker is an instance of swift.container.backend.ContainerBroker and initialized is True if the db file was initialized, False otherwise. Create the container_info table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. Also creates the container_stat view. conn DB connection object put_timestamp put timestamp storagepolicyindex storage policy index Create the object table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Create policy_stat table. conn DB connection object storagepolicyindex the policy_index the container is being created with Create the shard_range table which is specific to the container DB. conn DB connection object Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if forcedbfile was True when the broker was constructed, then the primary db file is the file passed to the broker constructor. A path to a db file; the file does not necessarily exist. Gets the cached list of valid db files that exist on disk for this broker. reloaddbfiles(). A list of paths to db files ordered by ascending epoch; the list may be empty. Mark an object deleted. name object name to be deleted timestamp timestamp when the object was marked as deleted storagepolicyindex the storage policy index for the object Check if container DB is empty. This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty. True if the database has no active objects, False otherwise Updates this brokers own shard range with the given epoch, sets its state to SHARDING and persists it in the" }, { "data": "epoch a Timestamp the brokers updated own shard range. Scans the container db for shard ranges. 
Scans the container db for shard ranges. Scanning will start at the upper bound of any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace. This method does not modify the state of the db; callers are responsible for persisting any shard range data in the db. shard_size the size of each shard range limit the maximum number of shard points to be found; a negative value (default) implies no limit. existing_ranges an optional list of existing ShardRanges; if given, this list should be sorted in order of upper bounds; the scan for new shard ranges will start at the upper bound of the last existing ShardRange. minimum_shard_size Minimum size of the final shard range. If this is greater than one then the final shard range may be extended to more than shard_size in order to avoid a further shard range with fewer than minimum_shard_size rows. a tuple; the first value in the tuple is a list of dicts each having keys {index, lower, upper, object_count} in order of ascending upper; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise.
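Conceptually, the scan above walks object names in order and emits a boundary every shard_size rows. A pure-Python illustration over an in-memory sorted list, ignoring the db access, limit and minimum-size refinements:

```
def find_shard_points(sorted_names, shard_size):
    # Yield (lower, upper) bounds splitting sorted_names into chunks
    # of at most shard_size names; '' stands in for the namespace
    # minimum/maximum bounds at either end.
    lower = ''
    for i in range(shard_size - 1, len(sorted_names) - 1, shard_size):
        upper = sorted_names[i]
        yield lower, upper
        lower = upper
    yield lower, ''  # final range extends to the namespace upper bound


names = ['a', 'b', 'c', 'd', 'e']
assert list(find_shard_points(names, 2)) == [
    ('', 'b'), ('b', 'd'), ('d', '')]
```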
Returns a list of all shard range data, including own shard range and deleted shard ranges. A list of dict representations of a ShardRange. Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry. a list of ContainerBroker Returns the current state of on disk db files. Get global data for the container. dict with keys: account, container, created_at, put_timestamp, delete_timestamp, status, status_changed_at, object_count, bytes_used, reported_put_timestamp, reported_delete_timestamp, reported_object_count, reported_bytes_used, hash, id, x_container_sync_point1, x_container_sync_point2, storage_policy_index and db_state. Get the is_deleted status and info for the container. a tuple, in the form (info, is_deleted) where info is a dict as returned by get_info and is_deleted is a boolean. Get a list of objects which are in a storage policy different from the container's storage policy. start last reconciler sync point count maximum number of entries to get list of dicts with keys: name, created_at, size, content_type, etag, storage_policy_index Returns a list of persisted namespaces per input parameters. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fill_gaps, marker and end_marker are ignored. reverse reverse the result order. states if specified, restricts the returned list to namespaces that have one of the given states; should be a list of ints. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. a list of Namespace objects. Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {name, created_at, size, content_type, etag, deleted, storage_policy_index}. limit maximum number of entries to get marker if set, objects with names less than or equal to this value will not be included in the list. end_marker if set, objects with names greater than or equal to this value will not be included in the list. include_deleted if True, include only deleted objects; if False, include only undeleted objects; otherwise (default), include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. a list of dicts, each describing an object. Returns a shard range representing this broker's own shard range. If no such range has been persisted in the broker's shard ranges table then a default shard range representing the entire namespace will be returned. The object_count and bytes_used of the returned shard range are not guaranteed to be up-to-date with the current object stats for this broker. Callers that require up-to-date stats should use the get_info method. no_default if True and the broker's own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned. an instance of ShardRange Get information about the DB required for replication. dict containing keys from get_info plus max_row and metadata count and metadata is the raw string. Returns a list of persisted shard ranges. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fill_gaps, marker and end_marker are ignored, but other constraints are applied (e.g. exclude_others and include_deleted). reverse reverse the result order. include_deleted include items that have the delete marker set. states if specified, restricts the returned list to shard ranges that have one of the given states; should be a list of ints. include_own boolean that governs whether the row whose name matches the broker's path is included in the returned list. If True, that row is included unless it is excluded by other constraints (e.g. marker, end_marker, includes). If False, that row is not included. Default is False. exclude_others boolean that governs whether the rows whose names do not match the broker's path are included in the returned list. If True, those rows are not included, otherwise they are included. Default is False. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. fill_gaps is ignored if includes is specified. a list of instances of swift.common.utils.ShardRange. Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING.
a dict with keys {bytesused, objectcount} Returns sharding specific info from the brokers metadata. key if given the value stored under key in the sharding info will be returned. either a dict of sharding info or the value stored under key in that dict. Returns sharding specific info from the brokers metadata with timestamps. key if given the value stored under key in the sharding info will be returned. a dict of sharding info with their timestamps. This function tells if there is any shard range other than the brokers own shard range, that is not marked as deleted. A boolean value as described above. Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Returns True if this container is a root container, False otherwise. A root container is a container that is not a shard of another container. Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query path if defined, will set the prefix and delimiter based on the path storagepolicyindex storage policy index for query reverse reverse the result order. include_deleted if True, include only deleted objects; if False (default), include only undeleted objects; otherwise, include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. transform_func an optional function that if given will be called for each object to get a transformed version of the object to include in the listing; should have same signature as transformrecord(); defaults to transformrecord(). all_policies if True, include objects for all storage policies ignoring any value given for storagepolicyindex allow_reserved exclude names with reserved-byte by default list of tuples of (name, createdat, size, contenttype, etag, deleted) Turn this db record dict into the format this service uses for pending pickles. Merge items into the object table. itemlist list of dictionaries of {name, createdat, size, content_type, etag, deleted, storagepolicyindex, ctype_timestamp, meta_timestamp} source if defined, update incoming_sync with the source Merge shard ranges into the shard range table. shard_ranges a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARDRANGEKEYS. Creates an object in the DB with its metadata. name object name to be created timestamp timestamp of when the object was created size object size content_type object content-type etag object etag deleted if True, marks the object as deleted and sets the deleted_at timestamp to timestamp storagepolicyindex the storage policy index for the object ctypetimestamp timestamp of when contenttype was last updated meta_timestamp timestamp of when metadata was last updated Reloads the cached list of valid on disk db files for this broker. Removes object records in the given namespace range from the object table. Note that objects are removed regardless of their" }, { "data": "lower defines the lower bound of object names that will be removed; names greater than this value will be removed; names less than or equal to this value will not be removed. 
upper defines the upper bound of object names that will be removed; names less than or equal to this value will be removed; names greater than this value will not be removed. The empty string is interpreted as there being no upper bound. maxrow if specified only rows less than or equal to maxrow will be removed Update reported stats, available with containers get_info. puttimestamp puttimestamp to update deletetimestamp deletetimestamp to update objectcount objectcount to update bytesused bytesused to update Given a list of values each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list. The following alias values are supported: listing maps to all states that are considered valid when listing objects; updating maps to all states that are considered valid for redirecting an object update; auditing maps to all states that are considered valid for a shard container that is updating its own shard range table from a root (this currently maps to all states except FOUND). states a list of values each of which may be the name of a state, the number of a state, or an alias a set of integer state numbers, or None if no states are given ValueError if any value in the given list is neither a valid state nor a valid alias Unlinks the brokers retiring DB file. True if the retiring DB was successfully unlinked, False otherwise. Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The brokers own shard range must have an epoch timestamp for this method to succeed. True if the fresh DB was successfully created, False otherwise. Updates the brokers metadata stored under the given key prefixed with a sharding specific namespace. key metadata key in the sharding metadata namespace. value metadata value Update the containerstat policyindex and statuschangedat. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise. Compares sharddata with existing and updates sharddata with any items of existing that take precedence over the corresponding item in shard_data. shard_data a dict representation of shard range that may be modified by this method. existing a dict representation of shard range. True if shard data has any item(s) that are considered to take precedence over the corresponding item in existing Compares new and existing shard ranges, updating the new shard ranges with any more recent state from the existing, and returns shard ranges sorted into those that need adding because they contain new or updated state and those that need deleting because their state has been superseded. newshardranges a list of dicts, each of which represents a shard range. existingshardranges a dict mapping shard range names to dicts representing a shard range. a tuple (toadd, todelete); to_add is a list of dicts, each of which represents a shard range that is to be added to the existing shard ranges; to_delete is a set of shard range names that are to be" }, { "data": "Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer. 
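A toy illustration of the precedence idea behind these merge helpers, collapsing the comparison to a single timestamp field; the real functions compare several data- and meta-related timestamps and state fields:

```
def merge_existing_precedence(new_item, existing):
    # If the existing record is newer, its attributes win and are
    # copied over the incoming item; returns True in that case.
    if existing and existing['timestamp'] > new_item['timestamp']:
        new_item.update(existing)
        return True
    return False


item = {'name': 'a/c', 'timestamp': '0000000001.00000', 'state': 'found'}
newer = {'name': 'a/c', 'timestamp': '0000000002.00000', 'state': 'active'}
assert merge_existing_precedence(item, newer)
assert item['state'] == 'active'
```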
The multiple timestamps are encoded into a single string for storing in the created_at column of the objects db table. new_item A dict of object update attributes existing A dict of existing object attributes True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False implying that the updated item is equal to the existing. Bases: Replicator alias of ContainerBroker Cleanup non primary database from disk if needed. broker the broker for the database were replicating orig_info snapshot of the broker replication info dict taken before replication responses a list of boolean success values for each replication request to other nodes returns False if deletion of the database was attempted but unsuccessful, otherwise returns True. Ensure that reconciler databases are only cleaned up at the end of the replication run. Look for object rows for objects updates in the wrong storage policy in broker with a ROWID greater than the rowid given as point. broker the container broker with misplaced objects point the last verified reconcilersyncpoint the last successful enqueued rowid Add queue entries for rows in item_list to the local reconciler container database. container the name of the reconciler container item_list the list of rows to enqueue True if successfully enqueued Find a device in the ring that is on this node on which to place a partition. Preference is given to a device that is a primary location for the partition. If no such device is found then a local device with weight is chosen, and failing that any local device. part a partition a node entry from the ring Get a local instance of the reconciler container broker that is appropriate to enqueue the given timestamp. timestamp the timestamp of the row to be enqueued a local reconciler broker Ensure any items merged to reconciler containers during replication are pushed out to correct nodes and any reconciler containers that do not belong on this node are removed. Run a replication pass once. Bases: ReplicatorRpc If broker has ownshardrange with an epoch then filter out an ownshardrange without an epoch, and log a warning about it. shards a list of candidate ShardRanges to merge broker a ContainerBroker logger a logger source string to log as source of shards a list of ShardRanges to actually merge Bases: BaseStorageServer WSGI Controller for the container server. Handle HTTP DELETE request. Handle HTTP GET request. The body of the response to a successful GET request contains a listing of either objects or shard ranges. The exact content of the listing is determined by a combination of request headers and query string parameters, as follows: The type of the listing is determined by the X-Backend-Record-Type header. If this header has value shard then the response body will be a list of shard ranges; if this header has value auto, and the container state is sharding or sharded, then the listing will be a list of shard ranges; otherwise the response body will be a list of objects. Both shard range and object listings may be filtered according to the constraints described" }, { "data": "However, the X-Backend-Ignore-Shard-Name-Filter header may be used to override the application of the marker, end_marker, includes and reverse parameters to shard range listings. These parameters will be ignored if the header has the value sharded and the current db sharding state is also sharded. Note that this header does not override the states constraint on shard range listings. 
The order of both shard range and object listings may be reversed by using a reverse query string parameter with a value in swift.common.utils.TRUE_VALUES. Both shard range and object listings may be constrained to a name range by the marker and end_marker query string parameters. Object listings will only contain objects whose names are greater than any marker value and less than any end_marker value. Shard range listings will only contain shard ranges whose namespace is greater than or includes any marker value and is less than or includes any end_marker value. Shard range listings may also be constrained by an includes query string parameter. If this parameter is present the listing will only contain shard ranges whose namespace includes the value of the parameter; any marker or end_marker parameters are ignored The length of an object listing may be constrained by the limit parameter. Object listings may also be constrained by prefix, delimiter and path query string parameters. Shard range listings will include deleted shard ranges if and only if the X-Backend-Include-Deleted header value is one of swift.common.utils.TRUE_VALUES. Object listings never include deleted objects. Shard range listings may be constrained to include only shard ranges whose state is specified by a query string states parameter. If present, the states parameter should be a comma separated list of either the string or integer representation of STATES. Alias values may be used in a states parameter value. The listing alias will cause the listing to include all shard ranges in a state suitable for contributing to an object listing. The updating alias will cause the listing to include all shard ranges in a state suitable to accept an object update. If either of these aliases is used then the shard range listing will if necessary be extended with a synthesised filler range in order to satisfy the requested name range when insufficient actual shard ranges are found. Any filler shard range will cover the otherwise uncovered tail of the requested name range and will point back to the same container. The auditing alias will cause the listing to include all shard ranges in a state useful to the sharder while auditing a shard container. This alias will not cause a filler range to be added, but will cause the containers own shard range to be included in the listing. For now, auditing is only supported when X-Backend-Record-Shard-Format is full. Shard range listings can be simplified to include only Namespace only attributes (name, lower and upper) if the caller send the header X-Backend-Record-Shard-Format with value namespace as a hint that it would prefer namespaces. If this header doesnt exist or the value is full, the listings will default to include all attributes of shard ranges. But if params has includes/marker/end_marker then the response will be full shard ranges, regardless the header of X-Backend-Record-Shard-Format. The response header X-Backend-Record-Type will tell the user what type it gets back. Listings are not normally returned from a deleted container. However, the X-Backend-Override-Deleted header may be used with a value in swift.common.utils.TRUE_VALUES to force a shard range listing to be returned from a deleted container whose DB file still" }, { "data": "req an instance of swift.common.swob.Request an instance of swift.common.swob.Response Returns a list of objects in response. req swob.Request object broker container DB broker object container container name params the request params. 
info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Returns a list of persisted shard ranges or namespaces in response. req swob.Request object broker container DB broker object container container name params the request params. info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Handle HTTP HEAD request. Handle HTTP POST request. A POST request will update the containers put_timestamp, unless it has an X-Backend-No-Timestamp-Update header with a truthy value. req an instance of Request. Handle HTTP PUT request. Update or create container. Put object into container. Put shards into container. Handle HTTP REPLICATE request (json-encoded RPC calls for replication.) Handle HTTP UPDATE request (merge_items RPCs coming from the proxy.) Update the account server(s) with latest container info. req swob.Request object account account name container container name broker container DB broker object if all the account requests return a 404 error code, HTTPNotFound response object, if the account cannot be updated due to a malformed header, an HTTPBadRequest response object, otherwise None. The list of hosts were allowed to send syncs to. This can be overridden by data in self.realms_conf Validate that the index supplied maps to a policy. policy index from request, or None if not present HTTPBadRequest if the supplied index is bogus ContainerSyncCluster instance for validating sync-to values. Perform mutation to container listing records that are common to all serialization formats, and returns it as a dict. Converts created time to iso timestamp. Replaces size with swift_bytes content type parameter. record object entry record modified record Return the shard_range database record as a dict, the keys will depend on the database fields provided in the record. record shard entry record, either ShardRange or Namespace. shardrecordfull boolean, when true the timestamp field is added as last_modified in iso format. dict suitable for listing responses paste.deploy app factory for creating WSGI container server apps Convert container info dict to headers. Split and validate path for a container. req a swob request a tuple of path parts as strings Split and validate path for an object. req a swob request a tuple of path parts as strings Bases: Daemon Move objects that are in the wrong storage policy. Validate source object will satisfy the misplaced object queue entry and move to destination. qpolicyindex the policy_index for the source object account the account name of the misplaced object container the container name of the misplaced object obj the name of the misplaced object q_ts the timestamp of the misplaced object path the full path of the misplaced object for logging containerpolicyindex the policy_index of the destination source_ts the timestamp of the source object sourceobjstatus the HTTP status source object request sourceobjinfo the HTTP headers of the source object request sourceobjiter the body iter of the source object request Issue a DELETE request against the destination to match the misplaced DELETE against the source. Dump stats to logger, noop when stats have been already been logged in the last minute. 
Issue a delete object request to the container for the misplaced object queue" }, { "data": "container the misplaced objects container obj the name of the misplaced object q_ts the timestamp of the misplaced object q_record the timestamp of the queue entry N.B. qts will normally be the same time as qrecord except when an object was manually re-enqued. Process an entry and remove from queue on success. q_container the queue container qentry the rawobj name from the q_container queue_item a parsed entry from the queue Main entry point for concurrent processing of misplaced objects. Iterate over all queue entries and delegate processing to spawned workers in the pool. Process a possibly misplaced object write request. Determine correct destination storage policy by checking with primary containers. Check source and destination, copying or deleting into destination and cleaning up the source as needed. This method wraps reconcileobject for exception handling. info a queue entry dict True to indicate the request is fully processed successfully, otherwise False. Override this to run forever Process every entry in the queue. Check if a given entry should be handled by this process. container the queue container queue_item an entry from the queue Update stats tracking for metric and emit log message. Issue a delete object request to the given storage_policy. account the account name container the container name obj the object name timestamp the timestamp of the object to delete policy_index the policy index to direct the request path the path to be used for logging Add an object to the container reconcilers queue. This will cause the container reconciler to move it from its current storage policy index to the correct storage policy index. container_ring container ring account the misplaced objects account container the misplaced objects container obj the misplaced object objpolicyindex the policy index where the misplaced object currently is obj_timestamp the misplaced objects X-Timestamp. We need this to ensure that the reconciler doesnt overwrite a newer object with an older one. op the method of the operation (DELETE or PUT) force over-write queue entries newer than obj_timestamp conn_timeout max time to wait for connection to container server response_timeout max time to wait for response from container server .misplaced_object container name, False on failure. Success means a majority of containers got the update. You have to squint to see it, but the general strategy is just: return the newest (of the recreated) return the oldest I tried cleaning it up for awhile, but settled on just writing a bunch of tests instead. Once you get an intuitive sense for the nuance here you can try and see theres a better way to spell the boolean logic but it all ends up looking sorta hairy. -1 if info is correct, 1 if remote_info is better Talk directly to the primary container servers to delete a particular object listing. Does not talk to object servers; use this only when a container entry does not actually have a corresponding object. Get the name of a container into which a misplaced object should be enqueued. The name is the objects last modified time rounded down to the nearest hour. objtimestamp a string representation of the objects createdat time from its container db row. a container name Compare remote_info to info and decide if the remote storage policy index should be used instead of ours. 
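The queue-container naming rule mentioned above (the object's last-modified time rounded down to the nearest hour) can be sketched in one line; the exact string formatting of the real container name is glossed over here:

```
def reconciler_container_hour(created_at):
    # created_at is a timestamp string such as '1700001234.56789'
    return int(float(created_at)) // 3600 * 3600


# Objects modified anywhere inside the same hour share a queue container.
assert reconciler_container_hour('1700001234.56789') == 1699999200
```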
Translate a reconciler container listing entry to a dictionary containing the parts of the misplaced object queue" }, { "data": "obj_info an entry in an a container listing with the required keys: name, content_type, and hash a queue entry dict with the keys: qpolicyindex, account, container, obj, qop, qts, q_record, and path Bases: object Encapsulates metadata associated with the process of cleaving a retiring DB. This metadata includes: ref: The unique part of the key that is used when persisting a serialized CleavingContext as sysmeta in the DB. The unique part of the key is based off the DB id. This ensures that each context is associated with a specific DB file. The unique part of the key is included in the CleavingContext but should not be modified by any caller. cursor: the upper bound of the last shard range to have been cleaved from the retiring DB. max_row: the retiring DBs max row; this is updated to the value of the retiring DBs max_row every time a CleavingContext is loaded for that DB, and may change during the process of cleaving the DB. cleavetorow: the value of max_row at the moment when cleaving starts for the DB. When cleaving completes (i.e. the cleave cursor has reached the upper bound of the cleaving namespace), cleavetorow is compared to the current max_row: if the two values are not equal then rows have been added to the DB which may not have been cleaved, in which case the CleavingContext is reset and cleaving is re-started. lastcleaveto_row: the minimum DB row from which cleaving should select objects to cleave; this is initially set to None i.e. all rows should be cleaved. If the CleavingContext is reset then the lastcleaveto_row is set to the current value of cleavetorow, which in turn is set to the current value of max_row by a subsequent call to start. The repeated cleaving therefore only selects objects in rows greater than the lastcleaveto_row, rather than cleaving the whole DB again. ranges_done: the number of shard ranges that have been cleaved from the retiring DB. ranges_todo: the number of shard ranges that are yet to be cleaved from the retiring DB. Returns a CleavingContext tracking the cleaving progress of the given brokers DB. broker an instances of ContainerBroker An instance of CleavingContext. Returns all cleaving contexts stored in the brokers DB. broker an instance of ContainerBroker list of tuples of (CleavingContext, timestamp) Persists the serialized CleavingContext as sysmeta in the given brokers DB. broker an instances of ContainerBroker Bases: ContainerSharderConf, ContainerReplicator Shards containers. Run the container sharder until stopped. Run the container sharder once. Iterates through all object rows in srcshardrange in name order yielding them in lists of up to batch_size in length. All batches of rows that are not marked deleted are yielded before all batches of rows that are marked deleted. broker A ContainerBroker. srcshardrange A ShardRange describing the source range. since_row include only object rows whose ROWID is greater than the given row id; by default all object rows are included. batch_size The maximum number of object rows to include in each yielded batch; defaults to cleaverowbatch_size. a generator of tuples of (list of rows, broker info dict) Iterates through all object rows in srcshardrange to place them in destination shard ranges provided by the destshardranges function. 
Yields tuples of (batch of object rows, destination shard range in which those object rows belong, broker" }, { "data": "If no destination shard range exists for a batch of object rows then tuples are yielded of (batch of object rows, None, broker info). This indicates to the caller that there are a non-zero number of object rows for which no destination shard range was found. Note that the same destination shard range may be referenced in more than one yielded tuple. broker A ContainerBroker. srcshardrange A ShardRange describing the source range. destshardranges A function which should return a list of destination shard ranges sorted in the order defined by sort_key(). a generator of tuples of (object row list, shard range, broker info dict) where shard_range may be None. Bases: object Combines new and existing shard ranges based on most recent state. newshardranges a list of ShardRange instances. existingshardranges a list of ShardRange instances. a list of ShardRange instances. Update donor shard ranges to shrinking state and merge donors and acceptors to broker. broker A ContainerBroker. acceptor_ranges A list of ShardRange that are to be acceptors. donor_ranges A list of ShardRange that are to be donors; these will have their state and timestamp updated. timestamp timestamp to use when updating donor state Find sequences of shard ranges that could be compacted into a single acceptor shard range. This function does not modify shard ranges. broker A ContainerBroker. shrink_threshold the number of rows below which a shard may be considered for shrinking into another shard expansion_limit the maximum number of rows that an acceptor shard range should have after other shard ranges have been compacted into it max_shrinking the maximum number of shard ranges that should be compacted into each acceptor; -1 implies unlimited. max_expanding the maximum number of acceptors to be found (i.e. the maximum number of sequences to be returned); -1 implies unlimited. include_shrinking if True then existing compactible sequences are included in the results; default is False. A list of ShardRangeList each containing a sequence of neighbouring shard ranges that may be compacted; the final shard range in the list is the acceptor Find all pairs of overlapping ranges in the given list. shard_ranges A list of ShardRange excludeparentchild If True then overlapping pairs that have a parent-child relationship within the past time period time_period are excluded from the returned set. Default is False. time_period the specified past time period in seconds. Value of 0 means all time in the past. a set of tuples, each tuple containing ranges that overlap with each other. Returns a list of all continuous paths through the shard ranges. An individual path may not necessarily span the entire namespace, but it will span a continuous namespace without gaps. shard_ranges A list of ShardRange. A list of ShardRangeList. Find gaps in the shard ranges and pairs of shard range paths that lead to and from those gaps. For each gap a single pair of adjacent paths is selected. The concatenation of all selected paths and gaps will span the entire namespace with no overlaps. shard_ranges a list of instances of ShardRange. within_range an optional ShardRange that constrains the search space; the method will only return gaps within this range. The default is the entire namespace. 
Transform the given sequences of shard ranges into a list of acceptors and a list of shrinking donors. For each given sequence the final ShardRange in the sequence (the acceptor) is expanded to accommodate the other ShardRanges in the sequence (the donors). The donors and acceptors are then merged into the broker. broker a ContainerBroker. sequences a list of ShardRangeList. Sorts the given list of paths such that the most preferred path is the first item in the list. paths a list of ShardRangeList. shard_range_to_span an instance of ShardRange that describes the namespace that would ideally be spanned by a path. Paths that include this namespace will be preferred over those that do not. Returns a sorted list of ShardRangeList. Update the own_shard_range with the up-to-date object stats from the broker. Note: this method does not persist the updated own_shard_range; callers should use broker.merge_shard_ranges if the updated stats need to be persisted. broker an instance of ContainerBroker. own_shard_range an instance of ShardRange. Returns own_shard_range with up-to-date object_count and bytes_used. Bases: Daemon Daemon to sync syncable containers. This is done by scanning the local devices for container databases and checking for x-container-sync-to and x-container-sync-key metadata values. If they exist, newer rows since the last sync will trigger PUTs or DELETEs to the other container. The actual syncing is slightly more complicated to make use of the three (or number-of-replicas) main nodes for a container without each trying to do the exact same work but also without missing work if one node happens to be down. Two sync points are kept per container database. All rows between the two sync points trigger updates. Any rows newer than both sync points cause updates depending on the node's position for the container (primary nodes do one third, etc. depending on the replica count of course). After a sync run, the first sync point is set to the newest ROWID known and the second sync point is set to the newest ROWID for which all updates have been sent. An example may help. Assume the replica count is 3 and perfectly matching ROWIDs starting at 1. First sync run, database has 6 rows: SyncPoint1 starts as -1. SyncPoint2 starts as -1. No rows between points, so no "all updates" rows. Six rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node 3. SyncPoint1 is set as 6 (the newest ROWID known). SyncPoint2 is left as -1 since no "all updates" rows were synced. Next sync run, database has 12 rows: SyncPoint1 starts as 6. SyncPoint2 starts as -1. The rows between -1 and 6 all trigger updates (most of which should short-circuit on the remote end as having already been done). Six more rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node 3. SyncPoint1 is set as 12 (the newest ROWID known). SyncPoint2 is set as 6 (the newest "all updates" ROWID). In this way, under normal circumstances each node sends its share of updates each run and just sends a batch of older updates to ensure nothing was missed.
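A rough sketch of the sync-point bookkeeping just described: rows in the catch-up region between the two sync points are sent by every node, while newer rows are split so each node sends roughly its share. Note that real Swift keys the split off a hash of the object name rather than the ROWID modulus used here for simplicity.

```
def rows_to_send(rows, node_ordinal, node_count, sync_point1, sync_point2):
    # Hedged simplification of the scheme described above.
    for row in rows:
        if row['ROWID'] <= sync_point2:
            continue  # already fully synced by all nodes
        if row['ROWID'] <= sync_point1:
            yield row  # "all updates" catch-up region: every node re-sends
        elif row['ROWID'] % node_count == node_ordinal:
            yield row  # this node's third (or 1/node_count) of the new rows
```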
conf The dict of configuration values from the [container-sync] section of the container-server.conf. container_ring If None, the <swift_dir>/container.ring.gz will be loaded. This is overridden by unit tests. The list of hosts we're allowed to send syncs to. This can be overridden by data in self.realms_conf. The dict of configuration values from the [container-sync] section of the container-server.conf. Number of successful DELETEs triggered. Number of containers that had a failure of some type. Number of successful PUTs triggered. swift.common.ring.Ring for locating containers. Number of containers whose sync has been turned off, but are not yet cleared from the sync store. Per container stats. These are collected per container: puts - the number of puts that were done for the container; deletes - the number of deletes that were done for the container; bytes - the total number of bytes transferred for the container. Checks the given path for a container database, determines if syncing is turned on for that database and, if so, sends any updates to the other container. path the path to a container db. Sends the update the row indicates to the sync_to container. The update can be either a delete or a put. row The updated row in the local database triggering the sync update. sync_to The URL to the remote container. user_key The X-Container-Sync-Key to use when sending requests to the other container. broker The local container database broker. info The get_info result from the local container database broker. realm The realm from self.realms_conf, if there is one. If None, fall back to using the older allowed_sync_hosts way of syncing. realm_key The realm key from self.realms_conf, if there is one. If None, fall back to using the older allowed_sync_hosts way of syncing. Returns True on success. Number of containers with sync turned on that were successfully synced. Maximum amount of time to spend syncing a container before moving on to the next one. If a container sync hasn't finished in this time, it'll just be resumed next scan. Path to the local device mount points. Minimum time between full scans. This is to keep the daemon from running wild on near empty systems. Logger to use for container-sync log lines. Indicates whether mount points should be verified as actual mount points (normally true, false for tests and SAIO). ContainerSyncCluster instance for validating sync-to values. Writes a report of the stats to the logger and resets the stats for the next report. Time of last stats report. Runs container sync scans until stopped. Runs a single container sync scan. ContainerSyncStore instance for iterating over synced containers. Bases: Daemon Update container information in account listings. Report container info to an account server (see the sketch at the end of this passage). node node dictionary from the account ring. part partition the account is on. container container name. put_timestamp put timestamp. delete_timestamp delete timestamp. count object count in the container. bytes bytes used in the container. storage_policy_index the policy index for the container. Walk the path looking for container DBs and process them. path path to walk. Get the account ring. Load it if it hasn't been yet. Get paths to all of the partitions on each drive to be processed. Returns a list of paths. Process a container, and update the information in the account. dbfile container DB to process. Run the updater continuously. Run the updater once.
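For illustration, the account report described above boils down to sending the container's stats as headers on a request to each account replica. The header names below follow common Swift conventions, but treat them as an assumption rather than a spec.

```
def account_report_headers(info):
    # Sketch only: builds the kind of headers a container reports to
    # the account server from a get_info-style dict.
    return {
        'X-Put-Timestamp': info['put_timestamp'],
        'X-Delete-Timestamp': info['delete_timestamp'],
        'X-Object-Count': info['object_count'],
        'X-Bytes-Used': info['bytes_used'],
        'X-Backend-Storage-Policy-Index': info['storage_policy_index'],
    }
```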
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "misc.html#acls.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: DatabaseAuditor Audit containers. alias of ContainerBroker Pluggable Back-ends for Container Server Bases: DatabaseBroker Encapsulates working with a container database. Note that this may involve multiple on-disk DB files if the container becomes sharded: dbfile is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded. db_files is a list of existing db files for the broker. This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING state. db_file is the path to whichever db is currently authoritative for the container. Depending on the containers state, this may not be the same as the dbfile argument given to init_(), unless forcedbfile is True in which case db_file is always equal to the dbfile argument given to init_(). pendingfile is always equal to db_file extended with .pending, i.e. <hash>.db.pending. Create a ContainerBroker instance. If the db doesnt exist, initialize the db file. device_path device path part partition number account account name string container container name string logger a logger instance epoch a timestamp to include in the db filename put_timestamp initial timestamp if broker needs to be initialized storagepolicyindex the storage policy index a tuple of (broker, initialized) where broker is an instance of swift.container.backend.ContainerBroker and initialized is True if the db file was initialized, False otherwise. Create the container_info table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. Also creates the container_stat view. conn DB connection object put_timestamp put timestamp storagepolicyindex storage policy index Create the object table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Create policy_stat table. conn DB connection object storagepolicyindex the policy_index the container is being created with Create the shard_range table which is specific to the container DB. conn DB connection object Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if forcedbfile was True when the broker was constructed, then the primary db file is the file passed to the broker constructor. A path to a db file; the file does not necessarily exist. Gets the cached list of valid db files that exist on disk for this broker. reloaddbfiles(). A list of paths to db files ordered by ascending epoch; the list may be empty. Mark an object deleted. name object name to be deleted timestamp timestamp when the object was marked as deleted storagepolicyindex the storage policy index for the object Check if container DB is empty. This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty. True if the database has no active objects, False otherwise Updates this brokers own shard range with the given epoch, sets its state to SHARDING and persists it in the" }, { "data": "epoch a Timestamp the brokers updated own shard range. Scans the container db for shard ranges. 
Scanning will start at the upper bound of the any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace. This method does not modify the state of the db; callers are responsible for persisting any shard range data in the db. shard_size the size of each shard range limit the maximum number of shard points to be found; a negative value (default) implies no limit. existing_ranges an optional list of existing ShardRanges; if given, this list should be sorted in order of upper bounds; the scan for new shard ranges will start at the upper bound of the last existing ShardRange. minimumshardsize Minimum size of the final shard range. If this is greater than one then the final shard range may be extended to more than shard_size in order to avoid a further shard range with less minimumshardsize rows. a tuple; the first value in the tuple is a list of dicts each having keys {index, lower, upper, object_count} in order of ascending upper; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise. Returns a list of all shard range data, including own shard range and deleted shard ranges. A list of dict representations of a ShardRange. Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry. a list of ContainerBroker Returns the current state of on disk db files. Get global data for the container. dict with keys: account, container, created_at, puttimestamp, deletetimestamp, status, statuschangedat, objectcount, bytesused, reportedputtimestamp, reporteddeletetimestamp, reportedobjectcount, reportedbytesused, hash, id, xcontainersync_point1, xcontainersyncpoint2, and storagepolicy_index, db_state. Get the is_deleted status and info for the container. a tuple, in the form (info, is_deleted) info is a dict as returned by getinfo and isdeleted is a boolean. Get a list of objects which are in a storage policy different from the containers storage policy. start last reconciler sync point count maximum number of entries to get list of dicts with keys: name, created_at, size, contenttype, etag, storagepolicy_index Returns a list of persisted namespaces per input parameters. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored. reverse reverse the result order. states if specified, restricts the returned list to namespaces that have one of the given states; should be a list of" }, { "data": "fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. 
Gaps enclosed within the found shard ranges are not filled. a list of Namespace objects. Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {name, createdat, size, contenttype, etag, deleted, storagepolicyindex}. limit maximum number of entries to get marker if set, objects with names less than or equal to this value will not be included in the list. end_marker if set, objects with names greater than or equal to this value will not be included in the list. include_deleted if True, include only deleted objects; if False, include only undeleted objects; otherwise (default), include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. a list of dicts, each describing an object. Returns a shard range representing this brokers own shard range. If no such range has been persisted in the brokers shard ranges table then a default shard range representing the entire namespace will be returned. The objectcount and bytesused of the returned shard range are not guaranteed to be up-to-date with the current object stats for this broker. Callers that require up-to-date stats should use the get_info method. no_default if True and the brokers own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned. an instance of ShardRange Get information about the DB required for replication. dict containing keys from getinfo plus maxrow and metadata count and metadata is the raw string. Returns a list of persisted shard ranges. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored, but other constraints are applied (e.g. exclude_others and include_deleted). reverse reverse the result order. include_deleted include items that have the delete marker set. states if specified, restricts the returned list to shard ranges that have one of the given states; should be a list of ints. include_own boolean that governs whether the row whose name matches the brokers path is included in the returned list. If True, that row is included unless it is excluded by other constraints (e.g. marker, end_marker, includes). If False, that row is not included. Default is False. exclude_others boolean that governs whether the rows whose names do not match the brokers path are included in the returned list. If True, those rows are not included, otherwise they are included. Default is False. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. fill_gaps is ignored if includes is" }, { "data": "a list of instances of swift.common.utils.ShardRange. Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING. 
a dict with keys {bytesused, objectcount} Returns sharding specific info from the brokers metadata. key if given the value stored under key in the sharding info will be returned. either a dict of sharding info or the value stored under key in that dict. Returns sharding specific info from the brokers metadata with timestamps. key if given the value stored under key in the sharding info will be returned. a dict of sharding info with their timestamps. This function tells if there is any shard range other than the brokers own shard range, that is not marked as deleted. A boolean value as described above. Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Returns True if this container is a root container, False otherwise. A root container is a container that is not a shard of another container. Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query path if defined, will set the prefix and delimiter based on the path storagepolicyindex storage policy index for query reverse reverse the result order. include_deleted if True, include only deleted objects; if False (default), include only undeleted objects; otherwise, include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. transform_func an optional function that if given will be called for each object to get a transformed version of the object to include in the listing; should have same signature as transformrecord(); defaults to transformrecord(). all_policies if True, include objects for all storage policies ignoring any value given for storagepolicyindex allow_reserved exclude names with reserved-byte by default list of tuples of (name, createdat, size, contenttype, etag, deleted) Turn this db record dict into the format this service uses for pending pickles. Merge items into the object table. itemlist list of dictionaries of {name, createdat, size, content_type, etag, deleted, storagepolicyindex, ctype_timestamp, meta_timestamp} source if defined, update incoming_sync with the source Merge shard ranges into the shard range table. shard_ranges a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARDRANGEKEYS. Creates an object in the DB with its metadata. name object name to be created timestamp timestamp of when the object was created size object size content_type object content-type etag object etag deleted if True, marks the object as deleted and sets the deleted_at timestamp to timestamp storagepolicyindex the storage policy index for the object ctypetimestamp timestamp of when contenttype was last updated meta_timestamp timestamp of when metadata was last updated Reloads the cached list of valid on disk db files for this broker. Removes object records in the given namespace range from the object table. Note that objects are removed regardless of their" }, { "data": "lower defines the lower bound of object names that will be removed; names greater than this value will be removed; names less than or equal to this value will not be removed. 
upper defines the upper bound of object names that will be removed; names less than or equal to this value will be removed; names greater than this value will not be removed. The empty string is interpreted as there being no upper bound. maxrow if specified only rows less than or equal to maxrow will be removed Update reported stats, available with containers get_info. puttimestamp puttimestamp to update deletetimestamp deletetimestamp to update objectcount objectcount to update bytesused bytesused to update Given a list of values each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list. The following alias values are supported: listing maps to all states that are considered valid when listing objects; updating maps to all states that are considered valid for redirecting an object update; auditing maps to all states that are considered valid for a shard container that is updating its own shard range table from a root (this currently maps to all states except FOUND). states a list of values each of which may be the name of a state, the number of a state, or an alias a set of integer state numbers, or None if no states are given ValueError if any value in the given list is neither a valid state nor a valid alias Unlinks the brokers retiring DB file. True if the retiring DB was successfully unlinked, False otherwise. Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The brokers own shard range must have an epoch timestamp for this method to succeed. True if the fresh DB was successfully created, False otherwise. Updates the brokers metadata stored under the given key prefixed with a sharding specific namespace. key metadata key in the sharding metadata namespace. value metadata value Update the containerstat policyindex and statuschangedat. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise. Compares sharddata with existing and updates sharddata with any items of existing that take precedence over the corresponding item in shard_data. shard_data a dict representation of shard range that may be modified by this method. existing a dict representation of shard range. True if shard data has any item(s) that are considered to take precedence over the corresponding item in existing Compares new and existing shard ranges, updating the new shard ranges with any more recent state from the existing, and returns shard ranges sorted into those that need adding because they contain new or updated state and those that need deleting because their state has been superseded. newshardranges a list of dicts, each of which represents a shard range. existingshardranges a dict mapping shard range names to dicts representing a shard range. a tuple (toadd, todelete); to_add is a list of dicts, each of which represents a shard range that is to be added to the existing shard ranges; to_delete is a set of shard range names that are to be" }, { "data": "Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer. 
The multiple timestamps are encoded into a single string for storing in the created_at column of the objects db table. new_item A dict of object update attributes existing A dict of existing object attributes True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False implying that the updated item is equal to the existing. Bases: Replicator alias of ContainerBroker Cleanup non primary database from disk if needed. broker the broker for the database were replicating orig_info snapshot of the broker replication info dict taken before replication responses a list of boolean success values for each replication request to other nodes returns False if deletion of the database was attempted but unsuccessful, otherwise returns True. Ensure that reconciler databases are only cleaned up at the end of the replication run. Look for object rows for objects updates in the wrong storage policy in broker with a ROWID greater than the rowid given as point. broker the container broker with misplaced objects point the last verified reconcilersyncpoint the last successful enqueued rowid Add queue entries for rows in item_list to the local reconciler container database. container the name of the reconciler container item_list the list of rows to enqueue True if successfully enqueued Find a device in the ring that is on this node on which to place a partition. Preference is given to a device that is a primary location for the partition. If no such device is found then a local device with weight is chosen, and failing that any local device. part a partition a node entry from the ring Get a local instance of the reconciler container broker that is appropriate to enqueue the given timestamp. timestamp the timestamp of the row to be enqueued a local reconciler broker Ensure any items merged to reconciler containers during replication are pushed out to correct nodes and any reconciler containers that do not belong on this node are removed. Run a replication pass once. Bases: ReplicatorRpc If broker has ownshardrange with an epoch then filter out an ownshardrange without an epoch, and log a warning about it. shards a list of candidate ShardRanges to merge broker a ContainerBroker logger a logger source string to log as source of shards a list of ShardRanges to actually merge Bases: BaseStorageServer WSGI Controller for the container server. Handle HTTP DELETE request. Handle HTTP GET request. The body of the response to a successful GET request contains a listing of either objects or shard ranges. The exact content of the listing is determined by a combination of request headers and query string parameters, as follows: The type of the listing is determined by the X-Backend-Record-Type header. If this header has value shard then the response body will be a list of shard ranges; if this header has value auto, and the container state is sharding or sharded, then the listing will be a list of shard ranges; otherwise the response body will be a list of objects. Both shard range and object listings may be filtered according to the constraints described" }, { "data": "However, the X-Backend-Ignore-Shard-Name-Filter header may be used to override the application of the marker, end_marker, includes and reverse parameters to shard range listings. These parameters will be ignored if the header has the value sharded and the current db sharding state is also sharded. Note that this header does not override the states constraint on shard range listings. 
The order of both shard range and object listings may be reversed by using a reverse query string parameter with a value in swift.common.utils.TRUE_VALUES. Both shard range and object listings may be constrained to a name range by the marker and end_marker query string parameters. Object listings will only contain objects whose names are greater than any marker value and less than any end_marker value. Shard range listings will only contain shard ranges whose namespace is greater than or includes any marker value and is less than or includes any end_marker value. Shard range listings may also be constrained by an includes query string parameter. If this parameter is present the listing will only contain shard ranges whose namespace includes the value of the parameter; any marker or end_marker parameters are ignored The length of an object listing may be constrained by the limit parameter. Object listings may also be constrained by prefix, delimiter and path query string parameters. Shard range listings will include deleted shard ranges if and only if the X-Backend-Include-Deleted header value is one of swift.common.utils.TRUE_VALUES. Object listings never include deleted objects. Shard range listings may be constrained to include only shard ranges whose state is specified by a query string states parameter. If present, the states parameter should be a comma separated list of either the string or integer representation of STATES. Alias values may be used in a states parameter value. The listing alias will cause the listing to include all shard ranges in a state suitable for contributing to an object listing. The updating alias will cause the listing to include all shard ranges in a state suitable to accept an object update. If either of these aliases is used then the shard range listing will if necessary be extended with a synthesised filler range in order to satisfy the requested name range when insufficient actual shard ranges are found. Any filler shard range will cover the otherwise uncovered tail of the requested name range and will point back to the same container. The auditing alias will cause the listing to include all shard ranges in a state useful to the sharder while auditing a shard container. This alias will not cause a filler range to be added, but will cause the containers own shard range to be included in the listing. For now, auditing is only supported when X-Backend-Record-Shard-Format is full. Shard range listings can be simplified to include only Namespace only attributes (name, lower and upper) if the caller send the header X-Backend-Record-Shard-Format with value namespace as a hint that it would prefer namespaces. If this header doesnt exist or the value is full, the listings will default to include all attributes of shard ranges. But if params has includes/marker/end_marker then the response will be full shard ranges, regardless the header of X-Backend-Record-Shard-Format. The response header X-Backend-Record-Type will tell the user what type it gets back. Listings are not normally returned from a deleted container. However, the X-Backend-Override-Deleted header may be used with a value in swift.common.utils.TRUE_VALUES to force a shard range listing to be returned from a deleted container whose DB file still" }, { "data": "req an instance of swift.common.swob.Request an instance of swift.common.swob.Response Returns a list of objects in response. req swob.Request object broker container DB broker object container container name params the request params. 
info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Returns a list of persisted shard ranges or namespaces in response. req swob.Request object broker container DB broker object container container name params the request params. info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Handle HTTP HEAD request. Handle HTTP POST request. A POST request will update the containers put_timestamp, unless it has an X-Backend-No-Timestamp-Update header with a truthy value. req an instance of Request. Handle HTTP PUT request. Update or create container. Put object into container. Put shards into container. Handle HTTP REPLICATE request (json-encoded RPC calls for replication.) Handle HTTP UPDATE request (merge_items RPCs coming from the proxy.) Update the account server(s) with latest container info. req swob.Request object account account name container container name broker container DB broker object if all the account requests return a 404 error code, HTTPNotFound response object, if the account cannot be updated due to a malformed header, an HTTPBadRequest response object, otherwise None. The list of hosts were allowed to send syncs to. This can be overridden by data in self.realms_conf Validate that the index supplied maps to a policy. policy index from request, or None if not present HTTPBadRequest if the supplied index is bogus ContainerSyncCluster instance for validating sync-to values. Perform mutation to container listing records that are common to all serialization formats, and returns it as a dict. Converts created time to iso timestamp. Replaces size with swift_bytes content type parameter. record object entry record modified record Return the shard_range database record as a dict, the keys will depend on the database fields provided in the record. record shard entry record, either ShardRange or Namespace. shardrecordfull boolean, when true the timestamp field is added as last_modified in iso format. dict suitable for listing responses paste.deploy app factory for creating WSGI container server apps Convert container info dict to headers. Split and validate path for a container. req a swob request a tuple of path parts as strings Split and validate path for an object. req a swob request a tuple of path parts as strings Bases: Daemon Move objects that are in the wrong storage policy. Validate source object will satisfy the misplaced object queue entry and move to destination. qpolicyindex the policy_index for the source object account the account name of the misplaced object container the container name of the misplaced object obj the name of the misplaced object q_ts the timestamp of the misplaced object path the full path of the misplaced object for logging containerpolicyindex the policy_index of the destination source_ts the timestamp of the source object sourceobjstatus the HTTP status source object request sourceobjinfo the HTTP headers of the source object request sourceobjiter the body iter of the source object request Issue a DELETE request against the destination to match the misplaced DELETE against the source. Dump stats to logger, noop when stats have been already been logged in the last minute. 
Issue a delete object request to the container for the misplaced object queue" }, { "data": "container the misplaced objects container obj the name of the misplaced object q_ts the timestamp of the misplaced object q_record the timestamp of the queue entry N.B. qts will normally be the same time as qrecord except when an object was manually re-enqued. Process an entry and remove from queue on success. q_container the queue container qentry the rawobj name from the q_container queue_item a parsed entry from the queue Main entry point for concurrent processing of misplaced objects. Iterate over all queue entries and delegate processing to spawned workers in the pool. Process a possibly misplaced object write request. Determine correct destination storage policy by checking with primary containers. Check source and destination, copying or deleting into destination and cleaning up the source as needed. This method wraps reconcileobject for exception handling. info a queue entry dict True to indicate the request is fully processed successfully, otherwise False. Override this to run forever Process every entry in the queue. Check if a given entry should be handled by this process. container the queue container queue_item an entry from the queue Update stats tracking for metric and emit log message. Issue a delete object request to the given storage_policy. account the account name container the container name obj the object name timestamp the timestamp of the object to delete policy_index the policy index to direct the request path the path to be used for logging Add an object to the container reconcilers queue. This will cause the container reconciler to move it from its current storage policy index to the correct storage policy index. container_ring container ring account the misplaced objects account container the misplaced objects container obj the misplaced object objpolicyindex the policy index where the misplaced object currently is obj_timestamp the misplaced objects X-Timestamp. We need this to ensure that the reconciler doesnt overwrite a newer object with an older one. op the method of the operation (DELETE or PUT) force over-write queue entries newer than obj_timestamp conn_timeout max time to wait for connection to container server response_timeout max time to wait for response from container server .misplaced_object container name, False on failure. Success means a majority of containers got the update. You have to squint to see it, but the general strategy is just: return the newest (of the recreated) return the oldest I tried cleaning it up for awhile, but settled on just writing a bunch of tests instead. Once you get an intuitive sense for the nuance here you can try and see theres a better way to spell the boolean logic but it all ends up looking sorta hairy. -1 if info is correct, 1 if remote_info is better Talk directly to the primary container servers to delete a particular object listing. Does not talk to object servers; use this only when a container entry does not actually have a corresponding object. Get the name of a container into which a misplaced object should be enqueued. The name is the objects last modified time rounded down to the nearest hour. objtimestamp a string representation of the objects createdat time from its container db row. a container name Compare remote_info to info and decide if the remote storage policy index should be used instead of ours. 
{ "category": "Runtime", "file_name": "misc.html#module-swift.common.memcached.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: object Walk through file system to audit objects Entrypoint to object_audit, with a failsafe generic exception handler. Audits the given object location. location an audit location (from diskfile.objectauditlocation_generator) Based on configs objectsizestats will keep track of how many objects fall into the specified ranges. For example with the following: objectsizestats = 10, 100, 1024 and your system has 3 objects of sizes: 5, 20, and 10000 bytes the log will look like: {10: 1, 100: 1, 1024: 0, OVER: 1} Bases: Daemon Audit objects. Parallel audit loop Clear recon cache entries Child execution Run the object audit Run the object audit until stopped. Run the object audit once Bases: object Run the user-supplied watcher. Simple and gets the job done. Note that we arent doing anything to isolate ourselves from hangs or file descriptor leaks in the plugins. Disk File Interface for the Swift Object Server The DiskFile, DiskFileWriter and DiskFileReader classes combined define the on-disk abstraction layer for supporting the object server REST API interfaces (excluding REPLICATE). Other implementations wishing to provide an alternative backend for the object server must implement the three classes. An example alternative implementation can be found in the memserver.py and memdiskfile.py modules along size this one. The DiskFileManager is a reference implemenation specific class and is not part of the backend API. The remaining methods in this module are considered implementation specific and are also not considered part of the backend API. Bases: object Represents an object location to be audited. Other than being a bucket of data, the only useful thing this does is stringify to a filesystem path so the auditors logs look okay. Bases: object Manage object files. This specific implementation manages object files on a disk formatted with a POSIX-compliant file system that supports extended attributes as metadata on a file or directory. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. The following path format is used for data file locations: <devicespath/<devicedir>/<datadir>/<partdir>/<suffixdir>/<hashdir>/ <datafile>.<ext> mgr associated DiskFileManager instance device_path path to the target device or drive partition partition on the device in which the object lives account account name for the object container container name for the object obj object name for the object _datadir override the full datadir otherwise constructed here policy the StoragePolicy instance use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations open_expired if True, open() will not raise a DiskFileExpired if object is expired nextpartpower the next partition power to be used Context manager to create a file. We create a temporary file first, and then return a DiskFileWriter object to encapsulate the state. Note An implementation is not required to perform on-disk preallocations even if the parameter is specified. But if it does and it fails, it must raise a DiskFileNoSpace exception. size optional initial size of file to explicitly allocate on disk extension file extension to use for the newly-created file; defaults to .data for the sake of tests DiskFileNoSpace if a size is specified and allocation fails Delete the object. This implementation creates a tombstone file using the given timestamp, and removes any older versions of the object file. 
Delete the object. This implementation creates a tombstone file using the given timestamp, and removes any older versions of the object file. Any file that has an older timestamp than timestamp will be deleted. Note An implementation is free to use or ignore the timestamp parameter. timestamp timestamp to compare with each file. DiskFileError this implementation will raise the same errors as the create() method. Provides the timestamp of the newest data file found in the object directory. Returns a Timestamp instance, or None if no data file was found. DiskFileNotOpen if the open() method has not been previously called on this instance. Provide the datafile metadata for a previously opened object as a dictionary. This is metadata that was included when the object was first PUT, and does not include metadata set by any subsequent POST. Returns the object's datafile metadata dictionary. DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked. Provide the metadata for a previously opened object as a dictionary. Returns the object's metadata dictionary. DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked. Provide the metafile metadata for a previously opened object as a dictionary. This is metadata that was written by a POST and does not include any persistent metadata that was set by the original PUT. Returns the object's .meta file metadata dictionary, or None if there is no .meta file. DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked. Open the object. This implementation opens the data file representing the object, reads the associated metadata in the extended attributes, additionally combining metadata from fast-POST .meta files. modernize if set, update this diskfile to the latest format. Currently, this means adding metadata checksums if none are present. current_time Unix time used in checking expiration. If not present, the current time will be used. Note An implementation is allowed to raise any of the following exceptions, but is only required to raise DiskFileNotExist when the object representation does not exist. DiskFileCollision on name mis-match with metadata. DiskFileNotExist if the object does not exist. DiskFileDeleted if the object was previously deleted. DiskFileQuarantined if while reading metadata of the file some data did not pass cross checks. Returns itself for use as a context manager. Return the metadata for an object without requiring the caller to open the object first. current_time Unix time used in checking expiration. If not present, the current time will be used. Returns the metadata dictionary for an object. DiskFileError this implementation will raise the same errors as the open() method. Return a swift.common.swob.Response class compatible app_iter object as defined by swift.obj.diskfile.DiskFileReader. For this implementation, the responsibility of closing the open file is passed to the swift.obj.diskfile.DiskFileReader object. keep_cache caller's preference for keeping data read in the OS buffer cache. cooperative_period the period parameter for cooperative yielding during file read. quarantine_hook 1-arg callable called when the object is quarantined; the arg is the reason for quarantine. Default is to ignore it. Not needed by the REST layer. Returns a swift.obj.diskfile.DiskFileReader object. Write a block of metadata to an object without requiring the caller to create the object first. Supports fast-POST behavior semantics. metadata dictionary of metadata to be associated with the object. DiskFileError this implementation will raise the same errors as the create() method.
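Putting the open/read methods above together, a typical read path might look like the following hedged sketch. The manager and policy are assumed to be already constructed, the device/partition/path values are placeholders, and error handling is omitted.

```
def read_object(mgr, policy):
    # Usage sketch of the documented DiskFile read path.
    df = mgr.get_diskfile('sda1', '0', 'AUTH_test', 'cont', 'obj', policy)
    with df.open() as open_df:
        metadata = open_df.get_metadata()
        reader = open_df.reader()  # closing responsibility passes to the reader
        size = sum(len(chunk) for chunk in reader)
    return metadata, size
```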
Bases: object Management class for devices, providing common place for shared parameters and methods not provided by the DiskFile class (which primarily services the object server REST API layer). The get_diskfile() method is how this implementation creates a DiskFile object. Note This class is reference implementation specific and not part of the pluggable on-disk backend API. Note TODO(portante): Not sure what the right name to recommend here, as manager seemed generic enough, though suggestions are welcome. conf caller provided configuration object logger caller provided logger Clean up on-disk files that are obsolete and gather the set of valid on-disk files for an object. hsh_path object hash path frag_index if set, search for a specific fragment index .data file, otherwise accept the first valid" }, { "data": "file a dict that may contain: valid on disk files keyed by their filename extension; a list of obsolete files stored under the key obsolete; a list of files remaining in the directory, reverse sorted, stored under the key files. Take whats in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Construct the path to a device without checking if it is mounted. device name of target device full path to the device Return the path to a device, first checking to see if either it is a proper mount point, or at least a directory depending on the mount_check configuration option. device name of target device mount_check whether or not to check mountedness of device. Defaults to bool(self.mount_check). full path to the device, None if the path to the device is not a proper mount point or directory. Returns a BaseDiskFile instance for an object based on the objects partition, path parts and policy. device name of target device partition partition on device in which the object lives account account name for the object container container name for the object obj object name for the object policy the StoragePolicy instance Returns a tuple of (a DiskFile instance for an object at the given object_hash, the basenames of the files in the objects hash dir). Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead. device name of target device partition partition on the device in which the object lives object_hash the hash of an object path policy the StoragePolicy instance DiskFileNotExist if the object does not exist a tuple comprising (an instance of BaseDiskFile, a list of file basenames) Returns a BaseDiskFile instance for an object at the given AuditLocation. audit_location object location to be audited Returns a DiskFile instance for an object at the given object_hash. Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead. 
device name of target device partition partition on the device in which the object lives object_hash the hash of an object path policy the StoragePolicy instance DiskFileNotExist if the object does not exist an instance of BaseDiskFile device name of target device partition partition name suffixes a list of suffix directories to be recalculated policy the StoragePolicy instance skip_rehash just mark the suffixes dirty; return None a dictionary that maps suffix directories Given a simple list of files names, determine the files that constitute a valid fileset i.e. a set of files that defines the state of an object, and determine the files that are obsolete and could be deleted. Note that some files may fall into neither category. If a file is considered part of a valid fileset then its info dict will be added to the results dict, keyed by <extension>_info. Any files that are no longer required will have their info dicts added to a list stored under the key obsolete. The results dict will always contain entries with keys ts_file, datafile and metafile. Their values will be the fully qualified path to a file of the corresponding type if there is such a file in the valid fileset, or" }, { "data": "files a list of file names. datadir directory name files are from; this is used to construct file paths in the results, but the datadir is not modified by this method. verify if True verify that the ondisk file contract has not been violated, otherwise do not verify. policy storage policy used to store the files. Used to validate fragment indexes for EC policies. ts_file -> path to a .ts file or None data_file -> path to a .data file or None meta_file -> path to a .meta file or None ctype_file -> path to a .meta file or None ts_info -> a file info dict for a .ts file data_info -> a file info dict for a .data file meta_info -> a file info dict for a .meta file ctype_info -> a file info dict for a .meta file which contains the content-type value unexpected -> a list of file paths for unexpected files possible_reclaim -> a list of file info dicts for possible reclaimable files obsolete -> a list of file info dicts for obsolete files Invalidates the hash for a suffix_dir in the partitions hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Returns filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name ctype_timestamp an optional content-type timestamp, an instance of Timestamp a file name Yield an AuditLocation for all objects stored under device_dirs. policy the StoragePolicy instance device_dirs directory of target device auditor_type either ALL or ZBF Parse an on disk file name. filename the file name including extension policy storage policy used to store the file a dict, with keys for timestamp, ext and ctype_timestamp: timestamp is a Timestamp ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension. Subclasses may override this method to add further keys to the returned dict. DiskFileError if any part of the filename is not able to be validated. A context manager that will lock on the partition given. 
device device targeted by the lock request policy policy targeted by the lock request partition partition targeted by the lock request PartitionLockTimeout If the lock on the partition cannot be granted within the configured timeout. Write data describing a container update notification to a pickle file in the async_pending directory. device name of target device account account name for the object container container name for the object obj object name for the object data update data to be written to pickle file timestamp a Timestamp policy the StoragePolicy instance In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename A context manager that will lock on the partition and, if configured to do so, on the device given. device name of target device policy policy targeted by the replication request partition partition targeted by the replication request ReplicationLockTimeout If the lock on the device cannot be granted within the configured timeout. Yields tuples of (hash_only, timestamps) for object information stored for the given device, partition, and (optionally)" }, { "data": "If suffixes is None, all stored suffixes will be searched for object hashes. Note that if suffixes is not None but empty, such as [], then nothing will be yielded. timestamps is a dict which may contain items mapping: ts_data -> timestamp of data or tombstone file, ts_meta -> timestamp of meta file, if one exists content-type value, if one exists durable -> True if data file at ts_data is durable, False otherwise where timestamps are instances of Timestamp device name of target device partition partition name policy the StoragePolicy instance suffixes optional list of suffix directories to be searched Yields tuples of (fullpath, suffixonly) for suffixes stored on the given device and partition. device name of target device partition partition name policy the StoragePolicy instance Bases: object Encapsulation of the WSGI read context for servicing GET REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile classs swift.obj.diskfile.DiskFile.reader() method. Note The quarantining behavior of this method is considered implementation specific, and is not required of the API. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. 
fp open file object pointer reference data_file on-disk data file name for the object obj_size verified on-disk size of the object etag expected metadata etag value for entire file diskchunksize size of reads from disk in bytes keepcachesize maximum object size that will be kept in cache device_path on-disk device path, used when quarantining an obj logger logger caller wants this object to use quarantine_hook 1-arg callable called w/reason when quarantined use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations diskfile the diskfile creating this DiskFileReader instance keep_cache should resulting reads be kept in the buffer cache cooperative_period the period parameter when does cooperative yielding during file read Returns an iterator over the data file for range (start, stop) Returns an iterator over the data file for a set of ranges Close the open file handle if present. For this specific implementation, this method will handle quarantining the file if necessary. Does some magic with splice() and tee() to move stuff from disk to network without ever touching userspace. wsockfd file descriptor (integer) of the socket out which to send data Bases: object Encapsulation of the write context for servicing PUT REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile classs swift.obj.diskfile.DiskFile.create() method. Note It is the responsibility of the swift.obj.diskfile.DiskFile.create() method context manager to close the open file descriptor. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. name name of object from REST API datadir on-disk directory object will end up in on swift.obj.diskfile.DiskFileWriter.put() fd open file descriptor of temporary file to receive data tmppath full path name of the opened file descriptor bytespersync number bytes written between sync calls diskfile the diskfile creating this DiskFileWriter instance nextpartpower the next partition power to be used extension the file extension to be used; may be used internally to distinguish between PUT/POST/DELETE operations Expose internal stats about written chunks. a tuple, (upload_size, etag) Perform any operations necessary to mark the object as durable. For replication policy type this is a no-op. timestamp object put timestamp, an instance of Timestamp Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Write a chunk of data to" }, { "data": "All invocations of this method must come before invoking the :func: For this implementation, the data is written into a temporary file. chunk the chunk of data to write as a string object Bases: BaseDiskFile alias of DiskFileReader alias of DiskFileWriter Bases: BaseDiskFileManager alias of DiskFile Bases: BaseDiskFileReader Bases: object Bases: BaseDiskFileWriter Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Bases: BaseDiskFile Provides the timestamp of the newest durable file found in the object directory. A Timestamp instance, or None if no durable file was found. DiskFileNotOpen if the open() method has not been previously called on this instance. Provides information about all fragments that were found in the object directory, including fragments without a matching durable file, and including any fragment chosen to construct the opened diskfile. 
A dict mapping <Timestamp instance> -> <list of frag indexes>, or None if the diskfile has not been opened or no fragments were found. Remove a tombstone file matching the specified timestamp or datafile matching the specified timestamp and fragment index from the object directory. This provides the EC reconstructor/ssync process with a way to remove a tombstone or fragment from a handoff node after reverting it to its primary node. The hash will be invalidated, and if empty the hsh_path will be removed immediately. timestamp the object timestamp, an instance of Timestamp frag_index fragment archive index, must be a whole number or None. nondurablepurgedelay only remove a non-durable data file if its been on disk longer than this many seconds. meta_timestamp if not None then remove any meta file with this timestamp alias of ECDiskFileReader alias of ECDiskFileWriter Bases: BaseDiskFileManager alias of ECDiskFile Returns the EC specific filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name frag_index a fragment archive index, used with .data extension only, must be a whole number. ctype_timestamp an optional content-type timestamp, an instance of Timestamp durable if True then include a durable marker in data filename. a file name DiskFileError if ext==.data and the kwarg frag_index is not a whole number Returns timestamp(s) and other info extracted from a policy specific file name. For EC policy the data file name includes a fragment index and possibly a durable marker, both of which must be stripped off to retrieve the timestamp. filename the file name including extension ctype_timestamp: timestamp is a Timestamp frag_index is an int or None ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension durable is a boolean that is True if the filename is a data file that includes a durable marker DiskFileError if any part of the filename is not able to be validated. Return int representation of frag_index, or raise a DiskFileError if frag_index is not a whole number. frag_index a fragment archive index policy storage policy used to validate the index against Bases: BaseDiskFileReader Bases: BaseDiskFileWriter Finalize put by renaming the object data file to include a durable marker. We do this for EC policy because it requires a 2-phase put commit confirmation. timestamp object put timestamp, an instance of Timestamp DiskFileError if the diskfile frag_index has not been set (either during initialisation or a call to put()) The only difference between this method and the replication policy DiskFileWriter method is adding the frag index to the" }, { "data": "metadata dictionary of metadata to be associated with object Take whats in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Extracts the policy for an object (based on the name of the objects directory) given the device-relative path to the object. Returns None in the event that the path is malformed in some way. 
The device-relative path is everything after the mount point; for example, an object stored at /srv/node/<device>/objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data would have device-relative path: objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data obj_path device-relative path of an object, or the full path. Returns a BaseStoragePolicy or None. Get the async dir for the given policy. policy_or_index StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. async_pending or async_pending-<N> as appropriate Get the data dir for the given policy. policy_or_index StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. objects or objects-<N> as appropriate Given the device path, policy, and partition, returns the full path to the partition Get the temp dir for the given policy. policy_or_index StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. tmp or tmp-<N> as appropriate Invalidates the hash for a suffix_dir in the partition's hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Given a devices path (e.g. /srv/node), yield an AuditLocation for all objects stored under that directory for the given datadir (policy), if device_dirs isn't set. If device_dirs is set, only yield AuditLocation for the objects under the entries in device_dirs. The AuditLocation only knows the path to the hash directory, not to the .data file therein (if any). This is to avoid a double listdir(hash_dir); the DiskFile object will always do one, so we don't. devices parent directory of the devices to be audited datadir objects directory mount_check flag to check if a mount check should be performed on devices logger a logger object device_dirs a list of directories under devices to traverse auditor_type either ALL or ZBF In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename Read the existing hashes.pkl a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Helper function to read the pickled metadata from an object file. fd file descriptor or filename to load the metadata from add_missing_checksum if set and checksum is missing, add it dictionary of metadata Hard-links a file located in target_path using the second path new_target_path. Creates intermediate directories if required. target_path current absolute filename new_target_path new absolute filename for the hardlink ignore_missing if True then no exception is raised if the link could not be made because target_path did not exist, otherwise an OSError will be raised. OSError if the hard link could not be created, unless the intended hard link already exists or the target_path does not exist and ignore_missing is True. True if the link was created by the call to this method, False otherwise. Write hashes to hashes.pkl The updated key is added to hashes before it is saved." }, { "data": "Helper function to write pickled metadata for an object file. fd file descriptor or filename to write the metadata metadata metadata to write Bases: Daemon Replicate objects. Encapsulates most logic and data needed by the object replication process. Each call to .replicate() performs one replication pass. It's up to the caller to do this in a loop. 
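As a sketch of that call-it-in-a-loop contract: the conf keys below are standard [object-replicator] options with illustrative values, and running this for real assumes a populated object ring and rsync configured, so treat it as an outline rather than a recipe.

```
import logging

from swift.obj.replicator import ObjectReplicator

logger = logging.getLogger(__name__)
conf = {'devices': '/srv/node', 'mount_check': 'false', 'concurrency': '2'}
replicator = ObjectReplicator(conf, logger)

# One pass; the overrides narrow the pass to particular devices,
# partitions or policies, exactly as described above.
replicator.replicate(override_devices=['sda1'])
replicator.stats_line()  # log the stats for the pass that just ran
```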
Helper function for collect_jobs to build jobs for replication using replication style storage policy Check to see if the ring has been updated :param object_ring: the ring to check boolean indicating whether or not the ring has changed Returns a sorted list of jobs (dictionaries) that specify the partitions, nodes, etc to be rsynced. override_devices if set, only jobs on these devices will be returned override_partitions if set, only jobs on these partitions will be returned override_policies if set, only jobs in these storage policies will be returned Returns a set of all local devices in all replication-type storage policies. This is the device names, e.g. sdq or d1234 or something, not the full ring entries. For each worker yield a (possibly empty) dict of kwargs to pass along to the daemons run() method after fork. The length of elements returned from this method will determine the number of processes created. If the returned iterable is empty, the Strategy will fallback to run-inline strategy. once False if the worker(s) will be daemonized, True if the worker(s) will be run once kwargs plumbed through via command line argparser an iterable of dicts, each element represents the kwargs to be passed to a single workers run() method after fork. Loop that runs in the background during replication. It periodically logs progress. Check whether our set of local devices remains the same. If devices have been added or removed, then we return False here so that we can kill off any worker processes and then distribute the new set of local devices across a new set of workers so that all devices are, once again, being worked on. This function may also cause recon stats to be updated. False if any local devices have been added or removed, True otherwise Make sure the policys rings are loaded. policy the StoragePolicy instance appropriate ring object Override this to do something after running using multiple worker processes. This method is called in the parent process. This is probably only useful for run-once mode since there is no after running in run-forever mode. Run a replication pass High-level method that replicates a single partition that doesnt belong on this node. job a dict containing info about the partition to be replicated Uses rsync to implement the sync method. This was the first sync method in Swift. Override this to run forever Override this to run the script once Logs various stats for the currently running replication pass. Synchronize local suffix directories from a partition with a remote node. node the dev entry for the remote node to sync with job information about the partition being synced suffixes a list of suffixes which need to be pushed boolean and dictionary, boolean indicating success or failure High-level method that replicates a single partition. job a dict containing info about the partition to be replicated Bases: object Note the failure of one or more devices. failures a list of (ip, device-name) pairs that failed Bases: object Sends SSYNC requests to the object server. These requests are eventually handled by ssync_receiver and full documentation about the process is" }, { "data": "Establishes a connection and starts an SSYNC request with the object server. Closes down the connection to the object server once done with the SSYNC request. Handles the sender-side of the MISSING_CHECK step of a SSYNC request. Full documentation of this can be found at Receiver.missing_check(). Sends a DELETE subrequest with the given information. 
Sends a PUT subrequest for the url_path using the source df (DiskFile) and content_length. Handles the sender-side of the UPDATES step of an SSYNC request. Full documentation of this can be found at Receiver.updates(). Bases: BufferedHTTPConnection alias of SsyncBufferedHTTPResponse Bases: BufferedHTTPResponse, object Reads a line from the SSYNC response body. httplib has no readline and will block on read(x) until x is read, so we have to do the work ourselves. A bit of this is taken from Pythons httplib itself. Parse missing_check line parts to determine which parts of local diskfile were wanted by the receiver. The encoder for parts is encode_wanted() Returns a string representing the object hash, its data file timestamp, the delta forwards to its metafile and content-type timestamps, if non-zero, and its durability, in the form: <hash> <tsdata> [m:<hex delta to tsmeta>[,t:<hex delta to ts_ctype>] [,durable:False] The decoder for this line is decode_missing() Bases: object Handles incoming SSYNC requests to the object server. These requests come from the object-replicator daemon that uses ssync_sender. The number of concurrent SSYNC requests is restricted by use of a replication_semaphore and can be configured with the object-server.conf [object-server] replication_concurrency setting. An SSYNC request is really just an HTTP conduit for sender/receiver replication communication. The overall SSYNC request should always succeed, but it will contain multiple requests within its request and response bodies. This hack is done so that replication concurrency can be managed. The general process inside an SSYNC request is: Initialize the request: Basic request validation, mount check, acquire semaphore lock, etc.. Missing check: Sender sends the hashes and timestamps of the object information it can send, receiver sends back the hashes it wants (doesnt have or has an older timestamp). Updates: Sender sends the object information requested. Close down: Release semaphore lock, etc. Basic validation of request and mount check. This function will be called before attempting to acquire a replication semaphore lock, so contains only quick checks. Handles the receiver-side of the MISSING_CHECK step of a SSYNC request. Receives a list of hashes and timestamps of object information the sender can provide and responds with a list of hashes desired, either because theyre missing or have an older timestamp locally. The process is generally: Sender sends :MISSING_CHECK: START and begins sending hash timestamp lines. Receiver gets :MISSING_CHECK: START and begins reading the hash timestamp lines, collecting the hashes of those it desires. Sender sends :MISSING_CHECK: END. Receiver gets :MISSING_CHECK: END, responds with :MISSING_CHECK: START, followed by the list of <wanted_hash> specifiers it collected as being wanted (one per line), :MISSING_CHECK: END, and flushes any buffers. Each <wanted_hash> specifier has the form <hash>[ <parts>] where <parts> is a string containing characters d and/or m indicating that only data or meta part of object respectively is required to be syncd. Sender gets :MISSING_CHECK: START and reads the list of hashes desired by the receiver until reading :MISSING_CHECK: END. The collection and then response is so the sender doesnt have to read while it writes to ensure network buffers dont fill up and block everything. Handles the UPDATES step of an SSYNC request. 
Receives a set of PUT and DELETE subrequests that will be routed to the object server itself for" }, { "data": "These contain the information requested by the MISSING_CHECK step. The PUT and DELETE subrequests are formatted pretty much exactly like regular HTTP requests, excepting the HTTP version on the first request line. The process is generally: Sender sends :UPDATES: START and begins sending the PUT and DELETE subrequests. Receiver gets :UPDATES: START and begins routing the subrequests to the object server. Sender sends :UPDATES: END. Receiver gets :UPDATES: END and sends :UPDATES: START and :UPDATES: END (assuming no errors). Sender gets :UPDATES: START and :UPDATES: END. If too many subrequests fail, as configured by replicationfailurethreshold and replicationfailureratio, the receiver will hang up the request early so as to not waste any more time. At step 4, the receiver will send back an error if there were any failures (that didnt cause a hangup due to the above thresholds) so the sender knows the whole was not entirely a success. This is so the sender knows if it can remove an out of place partition, for example. Bases: Exception Parse a string of the form generated by encode_missing() and return a dict with keys objecthash, tsdata, tsmeta, tsctype, durable. The encoder for this line is encode_missing() Compare a remote and local results and generate a wanted line. remote a dict, with tsdata and tsmeta keys in the form returned by decode_missing() local a dict, possibly empty, with tsdata and tsmeta keys in the form returned Receiver.checklocal() The decoder for this line is decode_wanted() Bases: Daemon Reconstruct objects using erasure code. And also rebalance EC Fragment Archive objects off handoff nodes. Encapsulates most logic and data needed by the object reconstruction process. Each call to .reconstruct() performs one pass. Its up to the caller to do this in a loop. Aggregate per-disk rcache updates from child workers. Helper function for collect_jobs to build jobs for reconstruction using EC style storage policy N.B. If this function ever returns an empty list of jobs the entire partition will be deleted. Check to see if the ring has been updated object_ring the ring to check boolean indicating whether or not the ring has changed Helper for getting partitions in the top level reconstructor In handoffs_only mode primary partitions will not be included in the returned (possibly empty) list. For EC we can potentially revert only some of a partition so well delete reverted objects here. Note that we delete the fragment index of the file we sent to the remote node. job the job being processed objects a dict of objects to be deleted, each entry maps hash=>timestamp In testing, the pool.waitall() call very occasionally failed to return. This is an attempt to make sure the reconstructor finishes its reconstruction pass in some eventuality. Add stats for this workers run to recon cache. When in worker mode (perdiskstats == True) this workers stats are added per device instead of in the top level keys (aggregation is serialized in the parent process). total the runtime of cycle in minutes override_devices (optional) list of device that are being reconstructed Returns a set of all local devices in all EC policies. Compare the local suffix hashes with the remote suffix hashes for the given local and remote fragment indexes. 
Return those suffixes which should be" }, { "data": "localsuff the local suffix hashes (from get_hashes) local_index the local fragment index for the job remote_suff the remote suffix hashes (from remote REPLICATE request) remote_index the remote fragment index for the job a list of strings, the suffix dirs to sync Take the set of all local devices for this node from all the EC policies rings, and distribute them evenly into the number of workers to be spawned according to the configured worker count. If devices is given in kwargs then distribute only those devices. once False if the worker(s) will be daemonized, True if the worker(s) will be run once kwargs optional overrides from the command line Loop that runs in the background during reconstruction. It periodically logs progress. Check whether rings have changed, and maybe do a recon update. False if any ec ring has changed Utility function that kills all coroutines currently running. Make sure the policys rings are loaded. policy the StoragePolicy instance appropriate ring object Turn a set of connections from backend object servers into a generator that yields up the rebuilt fragment archive for frag_index. Override this to do something after running using multiple worker processes. This method is called in the parent process. This is probably only useful for run-once mode since there is no after running in run-forever mode. Sync the local partition with the remote node(s) according to the parameters of the job. For primary nodes, the SYNC job type will define both left and right hand sync_to nodes to ssync with as defined by this primary nodes index in the node list based on the fragment index found in the partition. For non-primary nodes (either handoff revert, or rebalance) the REVERT job will define a single node in sync_to which is the proper/new home for the fragment index. N.B. ring rebalancing can be time consuming and handoff nodes fragment indexes do not have a stable order, its possible to have more than one REVERT job for a partition, and in some rare failure conditions there may even also be a SYNC job for the same partition - but each one will be processed separately because each job will define a separate list of node(s) to sync_to. job the job dict, with the keys defined in getjob_info Run a reconstruction pass Reconstructs a fragment archive - this method is called from ssync after a remote node responds that is missing this object - the local diskfile is opened to provide metadata - but to reconstruct the missing fragment archive we must connect to multiple object servers. job job from ssync_sender. node node to which were rebuilding. df an instance of BaseDiskFile. a DiskFile like class for use by ssync. DiskFileQuarantined if the fragment archive cannot be reconstructed and has as a result been quarantined. DiskFileError if the fragment archive cannot be reconstructed. Override this to run forever Override this to run the script once Logs various stats for the currently running reconstruction pass. Bases: object This class wraps the reconstructed fragment archive data and metadata in the DiskFile interface for ssync. Bases: object Encapsulates fragment GET response data related to a single timestamp. Object Server for Swift Bases: bytes Eventlet wont send headers until its accumulated at least eventlet.wsgi.MINIMUMCHUNKSIZE bytes or the app iter is exhausted. 
If we want to send the response body behind Eventlets back, perhaps with some zero-copy wizardry, then we have to unclog the plumbing in eventlet.wsgi to force the headers out, so we use an EventletPlungerString to empty out all of Eventlets buffers. Bases: BaseStorageServer Implements the WSGI application for the Swift Object Server. Handle HTTP DELETE requests for the Swift Object Server. Handle HTTP GET requests for the Swift Object Server. Handle HTTP HEAD requests for the Swift Object" }, { "data": "Handle HTTP POST requests for the Swift Object Server. Handle HTTP PUT requests for the Swift Object Server. Handle REPLICATE requests for the Swift Object Server. This is used by the object replicator to get hashes for directories. Note that the name REPLICATE is preserved for historical reasons as this verb really just returns the hashes information for the specified parameters and is used, for example, by both replication and EC. Sends or saves an async update. op operation performed (ex: PUT, or DELETE) account account name for the object container container name for the object obj object name host host that the container is on partition partition that the container is on contdevice device name that the container is on headers_out dictionary of headers to send in the container request objdevice device name that the object is in policy the associated BaseStoragePolicy instance loggerthreadlocals The thread local values to be set on the self.logger to retain transaction logging information. container_path optional path in the form <account/container> to which the update should be sent. If given this path will be used instead of constructing a path from the account and container params. Update the container when objects are updated. op operation performed (ex: PUT, or DELETE) account account name for the object container container name for the object obj object name request the original request object driving the update headers_out dictionary of headers to send in the container request(s) objdevice device name that the object is in policy the BaseStoragePolicy instance Update the expiring objects container when objects are updated. op operation performed (ex: PUT, or DELETE) delete_at scheduled delete in UNIX seconds, int account account name for the object container container name for the object obj object name request the original request driving the update objdevice device name that the object is in policy the BaseStoragePolicy instance (used for tmp dir) Utility method for instantiating a DiskFile object supporting a given REST API. An implementation of the object server that wants to use a different DiskFile class would simply over-ride this method to provide that behavior. Implementation specific setup. This method is called at the very end by the constructor to allow a specific implementation to modify existing attributes or add its own attributes. conf WSGI configuration parameter paste.deploy app factory for creating WSGI object server apps Read and discard any bytes from file_like. file_like file-like object to read from read_size how big a chunk to read at a time timeout how long to wait for a read (use None for no timeout) ChunkReadTimeout if no chunk was read in time Split and validate path for an object. request a swob request a tuple of path parts and storage policy Callback for swift.common.wsgi.runwsgi during the globalconf creation so that we can add our replication_semaphore, used to limit the number of concurrent SSYNC_REQUESTS across all workers. 
preloaded_app_conf The preloaded conf for the WSGI app. This conf instance will go away, so just read from it, don't write. global_conf The global conf that will eventually be passed to the app_factory function later. This conf is created before the worker subprocesses are forked, so can be useful to set up semaphores, shared memory, etc. Bases: object Wrap an iterator to rate-limit updates on a per-bucket basis, where updates are mapped to buckets by hashing their destination path. If an update is rate-limited then it is placed on a deferral queue and may be sent later if the wrapped iterator is exhausted before the drain_until time is" }, { "data": "reached. The deferral queue has constrained size and once the queue is full updates are evicted using a first-in-first-out policy. This policy is used because updates on the queue may have been made obsolete by newer updates written to disk, and this is more likely for updates that have been on the queue longest. The iterator increments stats as follows: The deferrals stat is incremented for each update that is rate-limited. Note that an individual update is rate-limited at most once. The skips stat is incremented for each rate-limited update that is not eventually yielded. This includes updates that are evicted from the deferral queue and all updates that remain in the deferral queue when drain_until time is reached and the iterator terminates. The drains stat is incremented for each rate-limited update that is eventually yielded. Consequently, when this iterator terminates, the sum of skips and drains is equal to the number of deferrals. update_iterable an async_pending update iterable logger a logger instance stats a SweepStats instance num_buckets number of buckets to divide container hashes into; the more buckets in total, the fewer containers map to each bucket (once a busy container slows down a bucket the whole bucket starts deferring) max_elements_per_group_per_second tunable, when deferring kicks in max_deferred_elements maximum number of deferred elements before skipping starts. Each bucket may defer updates, but once the total number of deferred updates summed across all buckets reaches this value then all buckets will skip subsequent updates. drain_until time at which any remaining deferred elements must be skipped and the iterator stops. Once the wrapped iterator has been exhausted, this iterator will drain deferred elements from its buckets until either all buckets have drained or this time is reached. Bases: Daemon Update object information in container listings. Get the container ring. Load it, if it hasn't been yet. If there are async pendings on the device, walk each one and update. device path to device Perform the object update to the container node node dictionary from the container ring part partition that holds the container op operation performed (ex: PUT or DELETE) obj object name being updated headers_out headers to send with the update a tuple of (success, node_id, redirect) where success is True if the update succeeded, node_id is the id of the node updated and redirect is either None or a tuple of (a path, a timestamp string). Process the object information to be updated and update. update_path path to pickled object update file device path to device policy storage policy of object update update the un-pickled update data kwargs un-used keys from update_ctx Run the updater continuously. Run the updater once. 
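A minimal sketch of driving the updater programmatically; the conf values are illustrative, and a deployment would normally run the swift-object-updater daemon instead.

```
import logging

from swift.obj.updater import ObjectUpdater

logger = logging.getLogger(__name__)
conf = {'devices': '/srv/node', 'mount_check': 'false', 'interval': '300'}
updater = ObjectUpdater(conf, logger)

updater.run_once()       # one sweep of the async_pending dirs
# updater.run_forever()  # or loop, sleeping between sweeps
```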
Bases: EventletRateLimiter Extends EventletRateLimiter to also maintain a deque of items that have been deferred due to rate-limiting, and to provide a comparator for sorting instanced by readiness. Bases: object Stats bucket for an update sweep A measure of the rate at which updates are being rate-limited is: ``` deferrals / (deferrals + successes + failures - drains) ``` A measure of the rate at which updates are not being sent during a sweep is: ``` skips / (skips + successes + failures) ``` Split the account and container parts out of the async update data. N.B. updates to shards set the container_path key while the account and container keys are always the root. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "misc.html#module-swift.common.internal_client.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: object Walk through file system to audit objects Entrypoint to object_audit, with a failsafe generic exception handler. Audits the given object location. location an audit location (from diskfile.objectauditlocation_generator) Based on configs objectsizestats will keep track of how many objects fall into the specified ranges. For example with the following: objectsizestats = 10, 100, 1024 and your system has 3 objects of sizes: 5, 20, and 10000 bytes the log will look like: {10: 1, 100: 1, 1024: 0, OVER: 1} Bases: Daemon Audit objects. Parallel audit loop Clear recon cache entries Child execution Run the object audit Run the object audit until stopped. Run the object audit once Bases: object Run the user-supplied watcher. Simple and gets the job done. Note that we arent doing anything to isolate ourselves from hangs or file descriptor leaks in the plugins. Disk File Interface for the Swift Object Server The DiskFile, DiskFileWriter and DiskFileReader classes combined define the on-disk abstraction layer for supporting the object server REST API interfaces (excluding REPLICATE). Other implementations wishing to provide an alternative backend for the object server must implement the three classes. An example alternative implementation can be found in the memserver.py and memdiskfile.py modules along size this one. The DiskFileManager is a reference implemenation specific class and is not part of the backend API. The remaining methods in this module are considered implementation specific and are also not considered part of the backend API. Bases: object Represents an object location to be audited. Other than being a bucket of data, the only useful thing this does is stringify to a filesystem path so the auditors logs look okay. Bases: object Manage object files. This specific implementation manages object files on a disk formatted with a POSIX-compliant file system that supports extended attributes as metadata on a file or directory. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. The following path format is used for data file locations: <devicespath/<devicedir>/<datadir>/<partdir>/<suffixdir>/<hashdir>/ <datafile>.<ext> mgr associated DiskFileManager instance device_path path to the target device or drive partition partition on the device in which the object lives account account name for the object container container name for the object obj object name for the object _datadir override the full datadir otherwise constructed here policy the StoragePolicy instance use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations open_expired if True, open() will not raise a DiskFileExpired if object is expired nextpartpower the next partition power to be used Context manager to create a file. We create a temporary file first, and then return a DiskFileWriter object to encapsulate the state. Note An implementation is not required to perform on-disk preallocations even if the parameter is specified. But if it does and it fails, it must raise a DiskFileNoSpace exception. size optional initial size of file to explicitly allocate on disk extension file extension to use for the newly-created file; defaults to .data for the sake of tests DiskFileNoSpace if a size is specified and allocation fails Delete the object. This implementation creates a tombstone file using the given timestamp, and removes any older versions of the object file. 
Any file that has an older timestamp than timestamp will be deleted. Note An implementation is free to use or ignore the timestamp parameter. timestamp timestamp to compare with each file DiskFileError this implementation will raise the same errors as the create()" }, { "data": "Provides the timestamp of the newest data file found in the object directory. A Timestamp instance, or None if no data file was found. DiskFileNotOpen if the open() method has not been previously called on this instance. Provide the datafile metadata for a previously opened object as a dictionary. This is metadata that was included when the object was first PUT, and does not include metadata set by any subsequent POST. objects datafile metadata dictionary DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Provide the metadata for a previously opened object as a dictionary. objects metadata dictionary DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Provide the metafile metadata for a previously opened object as a dictionary. This is metadata that was written by a POST and does not include any persistent metadata that was set by the original PUT. objects .meta file metadata dictionary, or None if there is no .meta file DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Open the object. This implementation opens the data file representing the object, reads the associated metadata in the extended attributes, additionally combining metadata from fast-POST .meta files. modernize if set, update this diskfile to the latest format. Currently, this means adding metadata checksums if none are present. current_time Unix time used in checking expiration. If not present, the current time will be used. Note An implementation is allowed to raise any of the following exceptions, but is only required to raise DiskFileNotExist when the object representation does not exist. DiskFileCollision on name mis-match with metadata DiskFileNotExist if the object does not exist DiskFileDeleted if the object was previously deleted DiskFileQuarantined if while reading metadata of the file some data did pass cross checks itself for use as a context manager Return the metadata for an object without requiring the caller to open the object first. current_time Unix time used in checking expiration. If not present, the current time will be used. metadata dictionary for an object DiskFileError this implementation will raise the same errors as the open() method. Return a swift.common.swob.Response class compatible app_iter object as defined by swift.obj.diskfile.DiskFileReader. For this implementation, the responsibility of closing the open file is passed to the swift.obj.diskfile.DiskFileReader object. keep_cache callers preference for keeping data read in the OS buffer cache cooperative_period the period parameter for cooperative yielding during file read quarantinehook 1-arg callable called when obj quarantined; the arg is the reason for quarantine. Default is to ignore it. Not needed by the REST layer. a swift.obj.diskfile.DiskFileReader object Write a block of metadata to an object without requiring the caller to create the object first. Supports fast-POST behavior semantics. metadata dictionary of metadata to be associated with the object DiskFileError this implementation will raise the same errors as the create() method. 
Bases: object Management class for devices, providing common place for shared parameters and methods not provided by the DiskFile class (which primarily services the object server REST API layer). The get_diskfile() method is how this implementation creates a DiskFile object. Note This class is reference implementation specific and not part of the pluggable on-disk backend API. Note TODO(portante): Not sure what the right name to recommend here, as manager seemed generic enough, though suggestions are welcome. conf caller provided configuration object logger caller provided logger Clean up on-disk files that are obsolete and gather the set of valid on-disk files for an object. hsh_path object hash path frag_index if set, search for a specific fragment index .data file, otherwise accept the first valid" }, { "data": "file a dict that may contain: valid on disk files keyed by their filename extension; a list of obsolete files stored under the key obsolete; a list of files remaining in the directory, reverse sorted, stored under the key files. Take whats in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Construct the path to a device without checking if it is mounted. device name of target device full path to the device Return the path to a device, first checking to see if either it is a proper mount point, or at least a directory depending on the mount_check configuration option. device name of target device mount_check whether or not to check mountedness of device. Defaults to bool(self.mount_check). full path to the device, None if the path to the device is not a proper mount point or directory. Returns a BaseDiskFile instance for an object based on the objects partition, path parts and policy. device name of target device partition partition on device in which the object lives account account name for the object container container name for the object obj object name for the object policy the StoragePolicy instance Returns a tuple of (a DiskFile instance for an object at the given object_hash, the basenames of the files in the objects hash dir). Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead. device name of target device partition partition on the device in which the object lives object_hash the hash of an object path policy the StoragePolicy instance DiskFileNotExist if the object does not exist a tuple comprising (an instance of BaseDiskFile, a list of file basenames) Returns a BaseDiskFile instance for an object at the given AuditLocation. audit_location object location to be audited Returns a DiskFile instance for an object at the given object_hash. Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead. 
device name of target device partition partition on the device in which the object lives object_hash the hash of an object path policy the StoragePolicy instance DiskFileNotExist if the object does not exist an instance of BaseDiskFile device name of target device partition partition name suffixes a list of suffix directories to be recalculated policy the StoragePolicy instance skip_rehash just mark the suffixes dirty; return None a dictionary that maps suffix directories Given a simple list of files names, determine the files that constitute a valid fileset i.e. a set of files that defines the state of an object, and determine the files that are obsolete and could be deleted. Note that some files may fall into neither category. If a file is considered part of a valid fileset then its info dict will be added to the results dict, keyed by <extension>_info. Any files that are no longer required will have their info dicts added to a list stored under the key obsolete. The results dict will always contain entries with keys ts_file, datafile and metafile. Their values will be the fully qualified path to a file of the corresponding type if there is such a file in the valid fileset, or" }, { "data": "files a list of file names. datadir directory name files are from; this is used to construct file paths in the results, but the datadir is not modified by this method. verify if True verify that the ondisk file contract has not been violated, otherwise do not verify. policy storage policy used to store the files. Used to validate fragment indexes for EC policies. ts_file -> path to a .ts file or None data_file -> path to a .data file or None meta_file -> path to a .meta file or None ctype_file -> path to a .meta file or None ts_info -> a file info dict for a .ts file data_info -> a file info dict for a .data file meta_info -> a file info dict for a .meta file ctype_info -> a file info dict for a .meta file which contains the content-type value unexpected -> a list of file paths for unexpected files possible_reclaim -> a list of file info dicts for possible reclaimable files obsolete -> a list of file info dicts for obsolete files Invalidates the hash for a suffix_dir in the partitions hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Returns filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name ctype_timestamp an optional content-type timestamp, an instance of Timestamp a file name Yield an AuditLocation for all objects stored under device_dirs. policy the StoragePolicy instance device_dirs directory of target device auditor_type either ALL or ZBF Parse an on disk file name. filename the file name including extension policy storage policy used to store the file a dict, with keys for timestamp, ext and ctype_timestamp: timestamp is a Timestamp ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension. Subclasses may override this method to add further keys to the returned dict. DiskFileError if any part of the filename is not able to be validated. A context manager that will lock on the partition given. 
device device targeted by the lock request policy policy targeted by the lock request partition partition targeted by the lock request PartitionLockTimeout If the lock on the partition cannot be granted within the configured timeout. Write data describing a container update notification to a pickle file in the async_pending directory. device name of target device account account name for the object container container name for the object obj object name for the object data update data to be written to pickle file timestamp a Timestamp policy the StoragePolicy instance In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename A context manager that will lock on the partition and, if configured to do so, on the device given. device name of target device policy policy targeted by the replication request partition partition targeted by the replication request ReplicationLockTimeout If the lock on the device cannot be granted within the configured timeout. Yields tuples of (hash_only, timestamps) for object information stored for the given device, partition, and (optionally)" }, { "data": "If suffixes is None, all stored suffixes will be searched for object hashes. Note that if suffixes is not None but empty, such as [], then nothing will be yielded. timestamps is a dict which may contain items mapping: ts_data -> timestamp of data or tombstone file, ts_meta -> timestamp of meta file, if one exists content-type value, if one exists durable -> True if data file at ts_data is durable, False otherwise where timestamps are instances of Timestamp device name of target device partition partition name policy the StoragePolicy instance suffixes optional list of suffix directories to be searched Yields tuples of (fullpath, suffixonly) for suffixes stored on the given device and partition. device name of target device partition partition name policy the StoragePolicy instance Bases: object Encapsulation of the WSGI read context for servicing GET REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile classs swift.obj.diskfile.DiskFile.reader() method. Note The quarantining behavior of this method is considered implementation specific, and is not required of the API. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. 
fp open file object pointer reference data_file on-disk data file name for the object obj_size verified on-disk size of the object etag expected metadata etag value for entire file disk_chunk_size size of reads from disk in bytes keep_cache_size maximum object size that will be kept in cache device_path on-disk device path, used when quarantining an obj logger logger caller wants this object to use quarantine_hook 1-arg callable called w/reason when quarantined use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations diskfile the diskfile creating this DiskFileReader instance keep_cache should resulting reads be kept in the buffer cache cooperative_period the period parameter for cooperative yielding during file reads Returns an iterator over the data file for range (start, stop) Returns an iterator over the data file for a set of ranges Close the open file handle if present. For this specific implementation, this method will handle quarantining the file if necessary. Does some magic with splice() and tee() to move stuff from disk to network without ever touching userspace. wsockfd file descriptor (integer) of the socket out of which to send data Bases: object Encapsulation of the write context for servicing PUT REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile class's swift.obj.diskfile.DiskFile.create() method. Note It is the responsibility of the swift.obj.diskfile.DiskFile.create() method context manager to close the open file descriptor. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. name name of object from REST API datadir on-disk directory object will end up in on swift.obj.diskfile.DiskFileWriter.put() fd open file descriptor of temporary file to receive data tmppath full path name of the opened file descriptor bytes_per_sync number of bytes written between sync calls diskfile the diskfile creating this DiskFileWriter instance next_part_power the next partition power to be used extension the file extension to be used; may be used internally to distinguish between PUT/POST/DELETE operations Expose internal stats about written chunks. a tuple, (upload_size, etag) Perform any operations necessary to mark the object as durable. For replication policy type this is a no-op. timestamp object put timestamp, an instance of Timestamp Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Write a chunk of data to disk. All invocations of this method must come before invoking swift.obj.diskfile.DiskFileWriter.put(). For this implementation, the data is written into a temporary file. chunk the chunk of data to write as a string object Bases: BaseDiskFile alias of DiskFileReader alias of DiskFileWriter Bases: BaseDiskFileManager alias of DiskFile Bases: BaseDiskFileReader Bases: object Bases: BaseDiskFileWriter Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Bases: BaseDiskFile Provides the timestamp of the newest durable file found in the object directory. A Timestamp instance, or None if no durable file was found. DiskFileNotOpen if the open() method has not been previously called on this instance. Provides information about all fragments that were found in the object directory, including fragments without a matching durable file, and including any fragment chosen to construct the opened diskfile. 
A dict mapping <Timestamp instance> -> <list of frag indexes>, or None if the diskfile has not been opened or no fragments were found. Remove a tombstone file matching the specified timestamp or data file matching the specified timestamp and fragment index from the object directory. This provides the EC reconstructor/ssync process with a way to remove a tombstone or fragment from a handoff node after reverting it to its primary node. The hash will be invalidated, and if empty the hsh_path will be removed immediately. timestamp the object timestamp, an instance of Timestamp frag_index fragment archive index, must be a whole number or None. nondurable_purge_delay only remove a non-durable data file if it's been on disk longer than this many seconds. meta_timestamp if not None then remove any meta file with this timestamp alias of ECDiskFileReader alias of ECDiskFileWriter Bases: BaseDiskFileManager alias of ECDiskFile Returns the EC specific filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name frag_index a fragment archive index, used with .data extension only, must be a whole number. ctype_timestamp an optional content-type timestamp, an instance of Timestamp durable if True then include a durable marker in data filename. a file name DiskFileError if ext==.data and the kwarg frag_index is not a whole number Returns timestamp(s) and other info extracted from a policy specific file name. For EC policy the data file name includes a fragment index and possibly a durable marker, both of which must be stripped off to retrieve the timestamp. filename the file name including extension a dict, with keys for timestamp, frag_index, durable, ext and ctype_timestamp: timestamp is a Timestamp frag_index is an int or None ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension durable is a boolean that is True if the filename is a data file that includes a durable marker DiskFileError if any part of the filename is not able to be validated. Return int representation of frag_index, or raise a DiskFileError if frag_index is not a whole number. frag_index a fragment archive index policy storage policy used to validate the index against Bases: BaseDiskFileReader Bases: BaseDiskFileWriter Finalize put by renaming the object data file to include a durable marker. We do this for EC policy because it requires a 2-phase put commit confirmation. timestamp object put timestamp, an instance of Timestamp DiskFileError if the diskfile frag_index has not been set (either during initialisation or a call to put()) The only difference between this method and the replication policy DiskFileWriter method is adding the frag index to the metadata. metadata dictionary of metadata to be associated with object Take what's in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Extracts the policy for an object (based on the name of the object's directory) given the device-relative path to the object. Returns None in the event that the path is malformed in some way. 
The device-relative path is everything after the mount point; for example, a full path such as /srv/node/d42/objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data would have device-relative path: objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data obj_path device-relative path of an object, or the full path a BaseStoragePolicy or None Get the async dir for the given policy. policy_or_index StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. async_pending or async_pending-<N> as appropriate Get the data dir for the given policy. policy_or_index StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. objects or objects-<N> as appropriate Given the device path, policy, and partition, returns the full path to the partition Get the temp dir for the given policy. policy_or_index StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. tmp or tmp-<N> as appropriate Invalidates the hash for a suffix_dir in the partition's hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Given a devices path (e.g. /srv/node), yield an AuditLocation for all objects stored under that directory for the given datadir (policy), if device_dirs isn't set. If device_dirs is set, only yield AuditLocation for the objects under the entries in device_dirs. The AuditLocation only knows the path to the hash directory, not to the .data file therein (if any). This is to avoid a double listdir(hash_dir); the DiskFile object will always do one, so we don't. devices parent directory of the devices to be audited datadir objects directory mount_check flag to check if a mount check should be performed on devices logger a logger object device_dirs a list of directories under devices to traverse auditor_type either ALL or ZBF In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename Read the existing hashes.pkl a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Helper function to read the pickled metadata from an object file. fd file descriptor or filename to load the metadata from add_missing_checksum if set and checksum is missing, add it dictionary of metadata Hard-links a file located in target_path using the second path new_target_path. Creates intermediate directories if required. target_path current absolute filename new_target_path new absolute filename for the hardlink ignore_missing if True then no exception is raised if the link could not be made because target_path did not exist, otherwise an OSError will be raised. OSError if the hard link could not be created, unless the intended hard link already exists or the target_path does not exist and ignore_missing is True. True if the link was created by the call to this method, False otherwise. Write hashes to hashes.pkl The updated key is added to hashes before it is written. Helper function to write pickled metadata for an object file. fd file descriptor or filename to write the metadata metadata metadata to write Bases: Daemon Replicate objects. Encapsulates most logic and data needed by the object replication process. Each call to .replicate() performs one replication pass. It's up to the caller to do this in a loop. 
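To make the pass-per-call contract concrete, here is a minimal sketch of the caller-side loop implied above; the run_replicator_forever name, the interval value and the bare time.sleep() pacing are illustrative assumptions, not the daemon's actual run_forever() implementation.

```python
import time

def run_replicator_forever(replicator, interval=30):
    # Each iteration performs exactly one replication pass, as the
    # class contract above requires the caller to loop.
    while True:
        begin = time.time()
        replicator.replicate()
        elapsed = time.time() - begin
        if elapsed < interval:
            # Pace the passes rather than replicating back-to-back.
            time.sleep(interval - elapsed)
```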
Helper function for collect_jobs to build jobs for replication using replication style storage policy Check to see if the ring has been updated object_ring the ring to check boolean indicating whether or not the ring has changed Returns a sorted list of jobs (dictionaries) that specify the partitions, nodes, etc to be rsynced. override_devices if set, only jobs on these devices will be returned override_partitions if set, only jobs on these partitions will be returned override_policies if set, only jobs in these storage policies will be returned Returns a set of all local devices in all replication-type storage policies. This is the device names, e.g. sdq or d1234 or something, not the full ring entries. For each worker yield a (possibly empty) dict of kwargs to pass along to the daemon's run() method after fork. The length of elements returned from this method will determine the number of processes created. If the returned iterable is empty, the Strategy will fall back to the run-inline strategy. once False if the worker(s) will be daemonized, True if the worker(s) will be run once kwargs plumbed through via command line argparser an iterable of dicts, each element represents the kwargs to be passed to a single worker's run() method after fork. Loop that runs in the background during replication. It periodically logs progress. Check whether our set of local devices remains the same. If devices have been added or removed, then we return False here so that we can kill off any worker processes and then distribute the new set of local devices across a new set of workers so that all devices are, once again, being worked on. This function may also cause recon stats to be updated. False if any local devices have been added or removed, True otherwise Make sure the policy's rings are loaded. policy the StoragePolicy instance appropriate ring object Override this to do something after running using multiple worker processes. This method is called in the parent process. This is probably only useful for run-once mode since there is no after running in run-forever mode. Run a replication pass High-level method that replicates a single partition that doesn't belong on this node. job a dict containing info about the partition to be replicated Uses rsync to implement the sync method. This was the first sync method in Swift. Override this to run forever Override this to run the script once Logs various stats for the currently running replication pass. Synchronize local suffix directories from a partition with a remote node. node the dev entry for the remote node to sync with job information about the partition being synced suffixes a list of suffixes which need to be pushed boolean and dictionary, boolean indicating success or failure High-level method that replicates a single partition. job a dict containing info about the partition to be replicated Bases: object Note the failure of one or more devices. failures a list of (ip, device-name) pairs that failed Bases: object Sends SSYNC requests to the object server. These requests are eventually handled by ssync_receiver and full documentation about the process is there. Establishes a connection and starts an SSYNC request with the object server. Closes down the connection to the object server once done with the SSYNC request. Handles the sender-side of the MISSING_CHECK step of a SSYNC request. Full documentation of this can be found at Receiver.missing_check(). Sends a DELETE subrequest with the given information. 
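For illustration, a hedged sketch of roughly what such a subrequest could look like on the wire, given the later description of UPDATES subrequests as HTTP-like messages whose first request line omits the HTTP version; the exact header set sent by the real sender is richer than shown here.

```python
def build_delete_subrequest(url_path, timestamp):
    # Request line without an HTTP version, one header, then a blank
    # line to end the headers -- a simplified assumption of the format.
    msg = ['DELETE %s' % url_path,
           'X-Timestamp: %s' % timestamp]
    return '\r\n'.join(msg) + '\r\n\r\n'

# e.g. build_delete_subrequest('/a/c/o', '1401811134.87364')
```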
Sends a PUT subrequest for the url_path using the source df (DiskFile) and content_length. Handles the sender-side of the UPDATES step of an SSYNC request. Full documentation of this can be found at Receiver.updates(). Bases: BufferedHTTPConnection alias of SsyncBufferedHTTPResponse Bases: BufferedHTTPResponse, object Reads a line from the SSYNC response body. httplib has no readline and will block on read(x) until x is read, so we have to do the work ourselves. A bit of this is taken from Python's httplib itself. Parse missing_check line parts to determine which parts of local diskfile were wanted by the receiver. The encoder for parts is encode_wanted() Returns a string representing the object hash, its data file timestamp, the delta forwards to its metafile and content-type timestamps, if non-zero, and its durability, in the form: <hash> <ts_data> [m:<hex delta to ts_meta>[,t:<hex delta to ts_ctype>]] [,durable:False] The decoder for this line is decode_missing() Bases: object Handles incoming SSYNC requests to the object server. These requests come from the object-replicator daemon that uses ssync_sender. The number of concurrent SSYNC requests is restricted by use of a replication_semaphore and can be configured with the object-server.conf [object-server] replication_concurrency setting. An SSYNC request is really just an HTTP conduit for sender/receiver replication communication. The overall SSYNC request should always succeed, but it will contain multiple requests within its request and response bodies. This hack is done so that replication concurrency can be managed. The general process inside an SSYNC request is: Initialize the request: Basic request validation, mount check, acquire semaphore lock, etc.. Missing check: Sender sends the hashes and timestamps of the object information it can send, receiver sends back the hashes it wants (doesn't have or has an older timestamp). Updates: Sender sends the object information requested. Close down: Release semaphore lock, etc. Basic validation of request and mount check. This function will be called before attempting to acquire a replication semaphore lock, so contains only quick checks. Handles the receiver-side of the MISSING_CHECK step of a SSYNC request. Receives a list of hashes and timestamps of object information the sender can provide and responds with a list of hashes desired, either because they're missing or have an older timestamp locally. The process is generally: Sender sends :MISSING_CHECK: START and begins sending hash timestamp lines. Receiver gets :MISSING_CHECK: START and begins reading the hash timestamp lines, collecting the hashes of those it desires. Sender sends :MISSING_CHECK: END. Receiver gets :MISSING_CHECK: END, responds with :MISSING_CHECK: START, followed by the list of <wanted_hash> specifiers it collected as being wanted (one per line), :MISSING_CHECK: END, and flushes any buffers. Each <wanted_hash> specifier has the form <hash>[ <parts>] where <parts> is a string containing characters d and/or m indicating that only the data or meta part of the object respectively is required to be sync'd. Sender gets :MISSING_CHECK: START and reads the list of hashes desired by the receiver until reading :MISSING_CHECK: END. The collection and then response is so the sender doesn't have to read while it writes to ensure network buffers don't fill up and block everything. Handles the UPDATES step of an SSYNC request. 
Receives a set of PUT and DELETE subrequests that will be routed to the object server itself for" }, { "data": "These contain the information requested by the MISSING_CHECK step. The PUT and DELETE subrequests are formatted pretty much exactly like regular HTTP requests, excepting the HTTP version on the first request line. The process is generally: Sender sends :UPDATES: START and begins sending the PUT and DELETE subrequests. Receiver gets :UPDATES: START and begins routing the subrequests to the object server. Sender sends :UPDATES: END. Receiver gets :UPDATES: END and sends :UPDATES: START and :UPDATES: END (assuming no errors). Sender gets :UPDATES: START and :UPDATES: END. If too many subrequests fail, as configured by replicationfailurethreshold and replicationfailureratio, the receiver will hang up the request early so as to not waste any more time. At step 4, the receiver will send back an error if there were any failures (that didnt cause a hangup due to the above thresholds) so the sender knows the whole was not entirely a success. This is so the sender knows if it can remove an out of place partition, for example. Bases: Exception Parse a string of the form generated by encode_missing() and return a dict with keys objecthash, tsdata, tsmeta, tsctype, durable. The encoder for this line is encode_missing() Compare a remote and local results and generate a wanted line. remote a dict, with tsdata and tsmeta keys in the form returned by decode_missing() local a dict, possibly empty, with tsdata and tsmeta keys in the form returned Receiver.checklocal() The decoder for this line is decode_wanted() Bases: Daemon Reconstruct objects using erasure code. And also rebalance EC Fragment Archive objects off handoff nodes. Encapsulates most logic and data needed by the object reconstruction process. Each call to .reconstruct() performs one pass. Its up to the caller to do this in a loop. Aggregate per-disk rcache updates from child workers. Helper function for collect_jobs to build jobs for reconstruction using EC style storage policy N.B. If this function ever returns an empty list of jobs the entire partition will be deleted. Check to see if the ring has been updated object_ring the ring to check boolean indicating whether or not the ring has changed Helper for getting partitions in the top level reconstructor In handoffs_only mode primary partitions will not be included in the returned (possibly empty) list. For EC we can potentially revert only some of a partition so well delete reverted objects here. Note that we delete the fragment index of the file we sent to the remote node. job the job being processed objects a dict of objects to be deleted, each entry maps hash=>timestamp In testing, the pool.waitall() call very occasionally failed to return. This is an attempt to make sure the reconstructor finishes its reconstruction pass in some eventuality. Add stats for this workers run to recon cache. When in worker mode (perdiskstats == True) this workers stats are added per device instead of in the top level keys (aggregation is serialized in the parent process). total the runtime of cycle in minutes override_devices (optional) list of device that are being reconstructed Returns a set of all local devices in all EC policies. Compare the local suffix hashes with the remote suffix hashes for the given local and remote fragment indexes. 
Return those suffixes which should be" }, { "data": "localsuff the local suffix hashes (from get_hashes) local_index the local fragment index for the job remote_suff the remote suffix hashes (from remote REPLICATE request) remote_index the remote fragment index for the job a list of strings, the suffix dirs to sync Take the set of all local devices for this node from all the EC policies rings, and distribute them evenly into the number of workers to be spawned according to the configured worker count. If devices is given in kwargs then distribute only those devices. once False if the worker(s) will be daemonized, True if the worker(s) will be run once kwargs optional overrides from the command line Loop that runs in the background during reconstruction. It periodically logs progress. Check whether rings have changed, and maybe do a recon update. False if any ec ring has changed Utility function that kills all coroutines currently running. Make sure the policys rings are loaded. policy the StoragePolicy instance appropriate ring object Turn a set of connections from backend object servers into a generator that yields up the rebuilt fragment archive for frag_index. Override this to do something after running using multiple worker processes. This method is called in the parent process. This is probably only useful for run-once mode since there is no after running in run-forever mode. Sync the local partition with the remote node(s) according to the parameters of the job. For primary nodes, the SYNC job type will define both left and right hand sync_to nodes to ssync with as defined by this primary nodes index in the node list based on the fragment index found in the partition. For non-primary nodes (either handoff revert, or rebalance) the REVERT job will define a single node in sync_to which is the proper/new home for the fragment index. N.B. ring rebalancing can be time consuming and handoff nodes fragment indexes do not have a stable order, its possible to have more than one REVERT job for a partition, and in some rare failure conditions there may even also be a SYNC job for the same partition - but each one will be processed separately because each job will define a separate list of node(s) to sync_to. job the job dict, with the keys defined in getjob_info Run a reconstruction pass Reconstructs a fragment archive - this method is called from ssync after a remote node responds that is missing this object - the local diskfile is opened to provide metadata - but to reconstruct the missing fragment archive we must connect to multiple object servers. job job from ssync_sender. node node to which were rebuilding. df an instance of BaseDiskFile. a DiskFile like class for use by ssync. DiskFileQuarantined if the fragment archive cannot be reconstructed and has as a result been quarantined. DiskFileError if the fragment archive cannot be reconstructed. Override this to run forever Override this to run the script once Logs various stats for the currently running reconstruction pass. Bases: object This class wraps the reconstructed fragment archive data and metadata in the DiskFile interface for ssync. Bases: object Encapsulates fragment GET response data related to a single timestamp. Object Server for Swift Bases: bytes Eventlet wont send headers until its accumulated at least eventlet.wsgi.MINIMUMCHUNKSIZE bytes or the app iter is exhausted. 
If we want to send the response body behind Eventlets back, perhaps with some zero-copy wizardry, then we have to unclog the plumbing in eventlet.wsgi to force the headers out, so we use an EventletPlungerString to empty out all of Eventlets buffers. Bases: BaseStorageServer Implements the WSGI application for the Swift Object Server. Handle HTTP DELETE requests for the Swift Object Server. Handle HTTP GET requests for the Swift Object Server. Handle HTTP HEAD requests for the Swift Object" }, { "data": "Handle HTTP POST requests for the Swift Object Server. Handle HTTP PUT requests for the Swift Object Server. Handle REPLICATE requests for the Swift Object Server. This is used by the object replicator to get hashes for directories. Note that the name REPLICATE is preserved for historical reasons as this verb really just returns the hashes information for the specified parameters and is used, for example, by both replication and EC. Sends or saves an async update. op operation performed (ex: PUT, or DELETE) account account name for the object container container name for the object obj object name host host that the container is on partition partition that the container is on contdevice device name that the container is on headers_out dictionary of headers to send in the container request objdevice device name that the object is in policy the associated BaseStoragePolicy instance loggerthreadlocals The thread local values to be set on the self.logger to retain transaction logging information. container_path optional path in the form <account/container> to which the update should be sent. If given this path will be used instead of constructing a path from the account and container params. Update the container when objects are updated. op operation performed (ex: PUT, or DELETE) account account name for the object container container name for the object obj object name request the original request object driving the update headers_out dictionary of headers to send in the container request(s) objdevice device name that the object is in policy the BaseStoragePolicy instance Update the expiring objects container when objects are updated. op operation performed (ex: PUT, or DELETE) delete_at scheduled delete in UNIX seconds, int account account name for the object container container name for the object obj object name request the original request driving the update objdevice device name that the object is in policy the BaseStoragePolicy instance (used for tmp dir) Utility method for instantiating a DiskFile object supporting a given REST API. An implementation of the object server that wants to use a different DiskFile class would simply over-ride this method to provide that behavior. Implementation specific setup. This method is called at the very end by the constructor to allow a specific implementation to modify existing attributes or add its own attributes. conf WSGI configuration parameter paste.deploy app factory for creating WSGI object server apps Read and discard any bytes from file_like. file_like file-like object to read from read_size how big a chunk to read at a time timeout how long to wait for a read (use None for no timeout) ChunkReadTimeout if no chunk was read in time Split and validate path for an object. request a swob request a tuple of path parts and storage policy Callback for swift.common.wsgi.runwsgi during the globalconf creation so that we can add our replication_semaphore, used to limit the number of concurrent SSYNC_REQUESTS across all workers. 
preloadedappconf The preloaded conf for the WSGI app. This conf instance will go away, so just read from it, dont write. global_conf The global conf that will eventually be passed to the app_factory function later. This conf is created before the worker subprocesses are forked, so can be useful to set up semaphores, shared memory, etc. Bases: object Wrap an iterator to rate-limit updates on a per-bucket basis, where updates are mapped to buckets by hashing their destination path. If an update is rate-limited then it is placed on a deferral queue and may be sent later if the wrapped iterator is exhausted before the drain_until time is" }, { "data": "The deferral queue has constrained size and once the queue is full updates are evicted using a first-in-first-out policy. This policy is used because updates on the queue may have been made obsolete by newer updates written to disk, and this is more likely for updates that have been on the queue longest. The iterator increments stats as follows: The deferrals stat is incremented for each update that is rate-limited. Note that a individual update is rate-limited at most once. The skips stat is incremented for each rate-limited update that is not eventually yielded. This includes updates that are evicted from the deferral queue and all updates that remain in the deferral queue when drain_until time is reached and the iterator terminates. The drains stat is incremented for each rate-limited update that is eventually yielded. Consequently, when this iterator terminates, the sum of skips and drains is equal to the number of deferrals. updateiterable an asyncpending update iterable logger a logger instance stats a SweepStats instance num_buckets number of buckets to divide container hashes into, the more buckets total the less containers to a bucket (once a busy container slows down a bucket the whole bucket starts deferring) maxelementspergroupper_second tunable, when deferring kicks in maxdeferredelements maximum number of deferred elements before skipping starts. Each bucket may defer updates, but once the total number of deferred updates summed across all buckets reaches this value then all buckets will skip subsequent updates. drain_until time at which any remaining deferred elements must be skipped and the iterator stops. Once the wrapped iterator has been exhausted, this iterator will drain deferred elements from its buckets until either all buckets have drained or this time is reached. Bases: Daemon Update object information in container listings. Get the container ring. Load it, if it hasnt been yet. If there are async pendings on the device, walk each one and update. device path to device Perform the object update to the container node node dictionary from the container ring part partition that holds the container op operation performed (ex: PUT or DELETE) obj object name being updated headers_out headers to send with the update a tuple of (success, node_id, redirect) where success is True if the update succeeded, node_id is the_id of the node updated and redirect is either None or a tuple of (a path, a timestamp string). Process the object information to be updated and update. update_path path to pickled object update file device path to device policy storage policy of object update update the un-pickled update data kwargs un-used keys from update_ctx Run the updater continuously. Run the updater once. 
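As a rough picture of what one saved update might contain, here is a hypothetical async_pending payload mirroring the fields async_update() is described as writing; the exact key names and layout are assumptions for illustration only.

```python
import pickle

# Hypothetical update record; real payloads are written by the object
# server, one pickle file per deferred container update.
update = {
    'op': 'PUT',
    'account': 'AUTH_test',
    'container': 'c',
    'obj': 'o',
    'headers': {'X-Timestamp': '1401811134.87364', 'X-Size': '0'},
}
blob = pickle.dumps(update)  # stored under async_pending(-<N>)/ on disk
```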
Bases: EventletRateLimiter Extends EventletRateLimiter to also maintain a deque of items that have been deferred due to rate-limiting, and to provide a comparator for sorting instances by readiness. Bases: object Stats bucket for an update sweep A measure of the rate at which updates are being rate-limited is: ``` deferrals / (deferrals + successes + failures - drains) ``` A measure of the rate at which updates are not being sent during a sweep is: ``` skips / (skips + successes + failures) ``` Split the account and container parts out of the async update data. N.B. updates to shards set the container_path key while the account and container keys are always the root.
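The two rates above translate directly into code; a small helper, assuming the counters are plain integers such as those a SweepStats instance tracks:

```python
def rate_limited_rate(deferrals, successes, failures, drains):
    # deferrals / (deferrals + successes + failures - drains)
    denom = deferrals + successes + failures - drains
    return deferrals / denom if denom else 0.0

def not_sent_rate(skips, successes, failures):
    # skips / (skips + successes + failures)
    denom = skips + successes + failures
    return skips / denom if denom else 0.0
```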
{ "category": "Runtime", "file_name": "misc.html#constraints.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
Bases: DatabaseAuditor Audit containers. alias of ContainerBroker Pluggable Back-ends for Container Server Bases: DatabaseBroker Encapsulates working with a container database. Note that this may involve multiple on-disk DB files if the container becomes sharded: _db_file is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded. db_files is a list of existing db files for the broker. This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING state. db_file is the path to whichever db is currently authoritative for the container. Depending on the container's state, this may not be the same as the _db_file argument given to __init__(), unless force_db_file is True in which case db_file is always equal to the _db_file argument given to __init__(). pending_file is always equal to db_file extended with .pending, i.e. <hash>.db.pending. Create a ContainerBroker instance. If the db doesn't exist, initialize the db file. device_path device path part partition number account account name string container container name string logger a logger instance epoch a timestamp to include in the db filename put_timestamp initial timestamp if broker needs to be initialized storage_policy_index the storage policy index a tuple of (broker, initialized) where broker is an instance of swift.container.backend.ContainerBroker and initialized is True if the db file was initialized, False otherwise. Create the container_info table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. Also creates the container_stat view. conn DB connection object put_timestamp put timestamp storage_policy_index storage policy index Create the object table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Create policy_stat table. conn DB connection object storage_policy_index the policy_index the container is being created with Create the shard_range table which is specific to the container DB. conn DB connection object Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if force_db_file was True when the broker was constructed, then the primary db file is the file passed to the broker constructor. A path to a db file; the file does not necessarily exist. Gets the cached list of valid db files that exist on disk for this broker. The cached list will be refreshed by calling reload_db_files(). A list of paths to db files ordered by ascending epoch; the list may be empty. Mark an object deleted. name object name to be deleted timestamp timestamp when the object was marked as deleted storage_policy_index the storage policy index for the object Check if container DB is empty. This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty. True if the database has no active objects, False otherwise Updates this broker's own shard range with the given epoch, sets its state to SHARDING and persists it in the db. epoch a Timestamp the broker's updated own shard range. Scans the container db for shard ranges. 
Scanning will start at the upper bound of the any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace. This method does not modify the state of the db; callers are responsible for persisting any shard range data in the db. shard_size the size of each shard range limit the maximum number of shard points to be found; a negative value (default) implies no limit. existing_ranges an optional list of existing ShardRanges; if given, this list should be sorted in order of upper bounds; the scan for new shard ranges will start at the upper bound of the last existing ShardRange. minimumshardsize Minimum size of the final shard range. If this is greater than one then the final shard range may be extended to more than shard_size in order to avoid a further shard range with less minimumshardsize rows. a tuple; the first value in the tuple is a list of dicts each having keys {index, lower, upper, object_count} in order of ascending upper; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise. Returns a list of all shard range data, including own shard range and deleted shard ranges. A list of dict representations of a ShardRange. Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry. a list of ContainerBroker Returns the current state of on disk db files. Get global data for the container. dict with keys: account, container, created_at, puttimestamp, deletetimestamp, status, statuschangedat, objectcount, bytesused, reportedputtimestamp, reporteddeletetimestamp, reportedobjectcount, reportedbytesused, hash, id, xcontainersync_point1, xcontainersyncpoint2, and storagepolicy_index, db_state. Get the is_deleted status and info for the container. a tuple, in the form (info, is_deleted) info is a dict as returned by getinfo and isdeleted is a boolean. Get a list of objects which are in a storage policy different from the containers storage policy. start last reconciler sync point count maximum number of entries to get list of dicts with keys: name, created_at, size, contenttype, etag, storagepolicy_index Returns a list of persisted namespaces per input parameters. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored. reverse reverse the result order. states if specified, restricts the returned list to namespaces that have one of the given states; should be a list of" }, { "data": "fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. 
Gaps enclosed within the found shard ranges are not filled. a list of Namespace objects. Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {name, createdat, size, contenttype, etag, deleted, storagepolicyindex}. limit maximum number of entries to get marker if set, objects with names less than or equal to this value will not be included in the list. end_marker if set, objects with names greater than or equal to this value will not be included in the list. include_deleted if True, include only deleted objects; if False, include only undeleted objects; otherwise (default), include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. a list of dicts, each describing an object. Returns a shard range representing this brokers own shard range. If no such range has been persisted in the brokers shard ranges table then a default shard range representing the entire namespace will be returned. The objectcount and bytesused of the returned shard range are not guaranteed to be up-to-date with the current object stats for this broker. Callers that require up-to-date stats should use the get_info method. no_default if True and the brokers own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned. an instance of ShardRange Get information about the DB required for replication. dict containing keys from getinfo plus maxrow and metadata count and metadata is the raw string. Returns a list of persisted shard ranges. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored, but other constraints are applied (e.g. exclude_others and include_deleted). reverse reverse the result order. include_deleted include items that have the delete marker set. states if specified, restricts the returned list to shard ranges that have one of the given states; should be a list of ints. include_own boolean that governs whether the row whose name matches the brokers path is included in the returned list. If True, that row is included unless it is excluded by other constraints (e.g. marker, end_marker, includes). If False, that row is not included. Default is False. exclude_others boolean that governs whether the rows whose names do not match the brokers path are included in the returned list. If True, those rows are not included, otherwise they are included. Default is False. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. fill_gaps is ignored if includes is" }, { "data": "a list of instances of swift.common.utils.ShardRange. Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING. 
a dict with keys {bytesused, objectcount} Returns sharding specific info from the brokers metadata. key if given the value stored under key in the sharding info will be returned. either a dict of sharding info or the value stored under key in that dict. Returns sharding specific info from the brokers metadata with timestamps. key if given the value stored under key in the sharding info will be returned. a dict of sharding info with their timestamps. This function tells if there is any shard range other than the brokers own shard range, that is not marked as deleted. A boolean value as described above. Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Returns True if this container is a root container, False otherwise. A root container is a container that is not a shard of another container. Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query path if defined, will set the prefix and delimiter based on the path storagepolicyindex storage policy index for query reverse reverse the result order. include_deleted if True, include only deleted objects; if False (default), include only undeleted objects; otherwise, include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. transform_func an optional function that if given will be called for each object to get a transformed version of the object to include in the listing; should have same signature as transformrecord(); defaults to transformrecord(). all_policies if True, include objects for all storage policies ignoring any value given for storagepolicyindex allow_reserved exclude names with reserved-byte by default list of tuples of (name, createdat, size, contenttype, etag, deleted) Turn this db record dict into the format this service uses for pending pickles. Merge items into the object table. itemlist list of dictionaries of {name, createdat, size, content_type, etag, deleted, storagepolicyindex, ctype_timestamp, meta_timestamp} source if defined, update incoming_sync with the source Merge shard ranges into the shard range table. shard_ranges a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARDRANGEKEYS. Creates an object in the DB with its metadata. name object name to be created timestamp timestamp of when the object was created size object size content_type object content-type etag object etag deleted if True, marks the object as deleted and sets the deleted_at timestamp to timestamp storagepolicyindex the storage policy index for the object ctypetimestamp timestamp of when contenttype was last updated meta_timestamp timestamp of when metadata was last updated Reloads the cached list of valid on disk db files for this broker. Removes object records in the given namespace range from the object table. Note that objects are removed regardless of their" }, { "data": "lower defines the lower bound of object names that will be removed; names greater than this value will be removed; names less than or equal to this value will not be removed. 
upper defines the upper bound of object names that will be removed; names less than or equal to this value will be removed; names greater than this value will not be removed. The empty string is interpreted as there being no upper bound. maxrow if specified only rows less than or equal to maxrow will be removed Update reported stats, available with containers get_info. puttimestamp puttimestamp to update deletetimestamp deletetimestamp to update objectcount objectcount to update bytesused bytesused to update Given a list of values each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list. The following alias values are supported: listing maps to all states that are considered valid when listing objects; updating maps to all states that are considered valid for redirecting an object update; auditing maps to all states that are considered valid for a shard container that is updating its own shard range table from a root (this currently maps to all states except FOUND). states a list of values each of which may be the name of a state, the number of a state, or an alias a set of integer state numbers, or None if no states are given ValueError if any value in the given list is neither a valid state nor a valid alias Unlinks the brokers retiring DB file. True if the retiring DB was successfully unlinked, False otherwise. Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The brokers own shard range must have an epoch timestamp for this method to succeed. True if the fresh DB was successfully created, False otherwise. Updates the brokers metadata stored under the given key prefixed with a sharding specific namespace. key metadata key in the sharding metadata namespace. value metadata value Update the containerstat policyindex and statuschangedat. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise. Compares sharddata with existing and updates sharddata with any items of existing that take precedence over the corresponding item in shard_data. shard_data a dict representation of shard range that may be modified by this method. existing a dict representation of shard range. True if shard data has any item(s) that are considered to take precedence over the corresponding item in existing Compares new and existing shard ranges, updating the new shard ranges with any more recent state from the existing, and returns shard ranges sorted into those that need adding because they contain new or updated state and those that need deleting because their state has been superseded. newshardranges a list of dicts, each of which represents a shard range. existingshardranges a dict mapping shard range names to dicts representing a shard range. a tuple (toadd, todelete); to_add is a list of dicts, each of which represents a shard range that is to be added to the existing shard ranges; to_delete is a set of shard range names that are to be" }, { "data": "Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer. 
The multiple timestamps are encoded into a single string for storing in the created_at column of the objects db table. new_item A dict of object update attributes existing A dict of existing object attributes True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False implying that the updated item is equal to the existing. Bases: Replicator alias of ContainerBroker Cleanup non primary database from disk if needed. broker the broker for the database were replicating orig_info snapshot of the broker replication info dict taken before replication responses a list of boolean success values for each replication request to other nodes returns False if deletion of the database was attempted but unsuccessful, otherwise returns True. Ensure that reconciler databases are only cleaned up at the end of the replication run. Look for object rows for objects updates in the wrong storage policy in broker with a ROWID greater than the rowid given as point. broker the container broker with misplaced objects point the last verified reconcilersyncpoint the last successful enqueued rowid Add queue entries for rows in item_list to the local reconciler container database. container the name of the reconciler container item_list the list of rows to enqueue True if successfully enqueued Find a device in the ring that is on this node on which to place a partition. Preference is given to a device that is a primary location for the partition. If no such device is found then a local device with weight is chosen, and failing that any local device. part a partition a node entry from the ring Get a local instance of the reconciler container broker that is appropriate to enqueue the given timestamp. timestamp the timestamp of the row to be enqueued a local reconciler broker Ensure any items merged to reconciler containers during replication are pushed out to correct nodes and any reconciler containers that do not belong on this node are removed. Run a replication pass once. Bases: ReplicatorRpc If broker has ownshardrange with an epoch then filter out an ownshardrange without an epoch, and log a warning about it. shards a list of candidate ShardRanges to merge broker a ContainerBroker logger a logger source string to log as source of shards a list of ShardRanges to actually merge Bases: BaseStorageServer WSGI Controller for the container server. Handle HTTP DELETE request. Handle HTTP GET request. The body of the response to a successful GET request contains a listing of either objects or shard ranges. The exact content of the listing is determined by a combination of request headers and query string parameters, as follows: The type of the listing is determined by the X-Backend-Record-Type header. If this header has value shard then the response body will be a list of shard ranges; if this header has value auto, and the container state is sharding or sharded, then the listing will be a list of shard ranges; otherwise the response body will be a list of objects. Both shard range and object listings may be filtered according to the constraints described" }, { "data": "However, the X-Backend-Ignore-Shard-Name-Filter header may be used to override the application of the marker, end_marker, includes and reverse parameters to shard range listings. These parameters will be ignored if the header has the value sharded and the current db sharding state is also sharded. Note that this header does not override the states constraint on shard range listings. 
The order of both shard range and object listings may be reversed by using a reverse query string parameter with a value in swift.common.utils.TRUE_VALUES. Both shard range and object listings may be constrained to a name range by the marker and end_marker query string parameters. Object listings will only contain objects whose names are greater than any marker value and less than any end_marker value. Shard range listings will only contain shard ranges whose namespace is greater than or includes any marker value and is less than or includes any end_marker value. Shard range listings may also be constrained by an includes query string parameter. If this parameter is present the listing will only contain shard ranges whose namespace includes the value of the parameter; any marker or end_marker parameters are ignored The length of an object listing may be constrained by the limit parameter. Object listings may also be constrained by prefix, delimiter and path query string parameters. Shard range listings will include deleted shard ranges if and only if the X-Backend-Include-Deleted header value is one of swift.common.utils.TRUE_VALUES. Object listings never include deleted objects. Shard range listings may be constrained to include only shard ranges whose state is specified by a query string states parameter. If present, the states parameter should be a comma separated list of either the string or integer representation of STATES. Alias values may be used in a states parameter value. The listing alias will cause the listing to include all shard ranges in a state suitable for contributing to an object listing. The updating alias will cause the listing to include all shard ranges in a state suitable to accept an object update. If either of these aliases is used then the shard range listing will if necessary be extended with a synthesised filler range in order to satisfy the requested name range when insufficient actual shard ranges are found. Any filler shard range will cover the otherwise uncovered tail of the requested name range and will point back to the same container. The auditing alias will cause the listing to include all shard ranges in a state useful to the sharder while auditing a shard container. This alias will not cause a filler range to be added, but will cause the containers own shard range to be included in the listing. For now, auditing is only supported when X-Backend-Record-Shard-Format is full. Shard range listings can be simplified to include only Namespace only attributes (name, lower and upper) if the caller send the header X-Backend-Record-Shard-Format with value namespace as a hint that it would prefer namespaces. If this header doesnt exist or the value is full, the listings will default to include all attributes of shard ranges. But if params has includes/marker/end_marker then the response will be full shard ranges, regardless the header of X-Backend-Record-Shard-Format. The response header X-Backend-Record-Type will tell the user what type it gets back. Listings are not normally returned from a deleted container. However, the X-Backend-Override-Deleted header may be used with a value in swift.common.utils.TRUE_VALUES to force a shard range listing to be returned from a deleted container whose DB file still" }, { "data": "req an instance of swift.common.swob.Request an instance of swift.common.swob.Response Returns a list of objects in response. req swob.Request object broker container DB broker object container container name params the request params. 
info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Returns a list of persisted shard ranges or namespaces in response. req swob.Request object broker container DB broker object container container name params the request params. info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Handle HTTP HEAD request. Handle HTTP POST request. A POST request will update the containers put_timestamp, unless it has an X-Backend-No-Timestamp-Update header with a truthy value. req an instance of Request. Handle HTTP PUT request. Update or create container. Put object into container. Put shards into container. Handle HTTP REPLICATE request (json-encoded RPC calls for replication.) Handle HTTP UPDATE request (merge_items RPCs coming from the proxy.) Update the account server(s) with latest container info. req swob.Request object account account name container container name broker container DB broker object if all the account requests return a 404 error code, HTTPNotFound response object, if the account cannot be updated due to a malformed header, an HTTPBadRequest response object, otherwise None. The list of hosts were allowed to send syncs to. This can be overridden by data in self.realms_conf Validate that the index supplied maps to a policy. policy index from request, or None if not present HTTPBadRequest if the supplied index is bogus ContainerSyncCluster instance for validating sync-to values. Perform mutation to container listing records that are common to all serialization formats, and returns it as a dict. Converts created time to iso timestamp. Replaces size with swift_bytes content type parameter. record object entry record modified record Return the shard_range database record as a dict, the keys will depend on the database fields provided in the record. record shard entry record, either ShardRange or Namespace. shardrecordfull boolean, when true the timestamp field is added as last_modified in iso format. dict suitable for listing responses paste.deploy app factory for creating WSGI container server apps Convert container info dict to headers. Split and validate path for a container. req a swob request a tuple of path parts as strings Split and validate path for an object. req a swob request a tuple of path parts as strings Bases: Daemon Move objects that are in the wrong storage policy. Validate source object will satisfy the misplaced object queue entry and move to destination. qpolicyindex the policy_index for the source object account the account name of the misplaced object container the container name of the misplaced object obj the name of the misplaced object q_ts the timestamp of the misplaced object path the full path of the misplaced object for logging containerpolicyindex the policy_index of the destination source_ts the timestamp of the source object sourceobjstatus the HTTP status source object request sourceobjinfo the HTTP headers of the source object request sourceobjiter the body iter of the source object request Issue a DELETE request against the destination to match the misplaced DELETE against the source. Dump stats to logger, noop when stats have been already been logged in the last minute. 
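Tying back to the listing-record mutation described earlier in this section (created time converted to an ISO timestamp, size replaced from a swift_bytes content-type parameter), here is a simplified sketch; the field names and parsing details are assumptions for illustration.

```python
from datetime import datetime, timezone

def format_record(record):
    # Convert the epoch created_at into an ISO last_modified value.
    ts = float(record.pop('created_at'))
    record['last_modified'] = datetime.fromtimestamp(
        ts, timezone.utc).strftime('%Y-%m-%dT%H:%M:%S.%f')
    # If the content-type carries a swift_bytes parameter, it holds the
    # true object size and is stripped from the reported content-type.
    ctype, sep, param = record['content_type'].partition(';')
    param = param.strip()
    if sep and param.startswith('swift_bytes='):
        record['bytes'] = int(param.split('=', 1)[1])
        record['content_type'] = ctype
    return record
```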
Issue a delete object request to the container for the misplaced object queue" }, { "data": "entry. container the misplaced objects container obj the name of the misplaced object q_ts the timestamp of the misplaced object q_record the timestamp of the queue entry N.B. q_ts will normally be the same time as q_record except when an object was manually re-enqueued. Process an entry and remove from queue on success. q_container the queue container q_entry the raw_obj name from the q_container queue queue_item a parsed entry from the queue Main entry point for concurrent processing of misplaced objects. Iterate over all queue entries and delegate processing to spawned workers in the pool. Process a possibly misplaced object write request. Determine correct destination storage policy by checking with primary containers. Check source and destination, copying or deleting into destination and cleaning up the source as needed. This method wraps _reconcile_object for exception handling. info a queue entry dict True to indicate the request is fully processed successfully, otherwise False. Override this to run forever Process every entry in the queue. Check if a given entry should be handled by this process. container the queue container queue_item an entry from the queue Update stats tracking for metric and emit log message. Issue a delete object request to the given storage_policy. account the account name container the container name obj the object name timestamp the timestamp of the object to delete policy_index the policy index to direct the request path the path to be used for logging Add an object to the container reconciler's queue. This will cause the container reconciler to move it from its current storage policy index to the correct storage policy index. container_ring container ring account the misplaced objects account container the misplaced objects container obj the misplaced object obj_policy_index the policy index where the misplaced object currently is obj_timestamp the misplaced object's X-Timestamp. We need this to ensure that the reconciler doesn't overwrite a newer object with an older one. op the method of the operation (DELETE or PUT) force over-write queue entries newer than obj_timestamp conn_timeout max time to wait for connection to container server response_timeout max time to wait for response from container server .misplaced_object container name, False on failure. Success means a majority of containers got the update. You have to squint to see it, but the general strategy is just: return the newest (of the recreated) return the oldest I tried cleaning it up for a while, but settled on just writing a bunch of tests instead. Once you get an intuitive sense for the nuance here you can try and see there's a better way to spell the boolean logic but it all ends up looking sorta hairy. -1 if info is correct, 1 if remote_info is better Talk directly to the primary container servers to delete a particular object listing. Does not talk to object servers; use this only when a container entry does not actually have a corresponding object. Get the name of a container into which a misplaced object should be enqueued. The name is the object's last modified time rounded down to the nearest hour. obj_timestamp a string representation of the object's created_at time from its container db row. a container name Compare remote_info to info and decide if the remote storage policy index should be used instead of ours.
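A sketch of enqueuing a misplaced object, assuming the helper described above is swift.container.reconciler.add_to_reconciler_queue (the module path is an assumption); the hour-bucket naming rule is shown alongside as a plain function, since the real helper may format the value differently:
```
from swift.common.ring import Ring
from swift.container.reconciler import add_to_reconciler_queue  # name assumed

def hour_bucket(obj_timestamp):
    # The queue container name: the object's created_at time rounded
    # down to the nearest hour, per the description above.
    return str(int(float(obj_timestamp)) // 3600 * 3600)

container_ring = Ring('/etc/swift', ring_name='container')
# Values below are illustrative: move 'myobj' out of policy index 2.
queued_in = add_to_reconciler_queue(
    container_ring, 'AUTH_test', 'mycontainer', 'myobj',
    obj_policy_index=2, obj_timestamp='1712345678.12345', op='PUT')
print(queued_in)                          # expected: the queue container name
print(hour_bucket('1712345678.12345'))    # -> '1712343600'
```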
Translate a reconciler container listing entry to a dictionary containing the parts of the misplaced object queue" }, { "data": "entry. obj_info an entry in a container listing with the required keys: name, content_type, and hash a queue entry dict with the keys: q_policy_index, account, container, obj, q_op, q_ts, q_record, and path Bases: object Encapsulates metadata associated with the process of cleaving a retiring DB. This metadata includes: ref: The unique part of the key that is used when persisting a serialized CleavingContext as sysmeta in the DB. The unique part of the key is based off the DB id. This ensures that each context is associated with a specific DB file. The unique part of the key is included in the CleavingContext but should not be modified by any caller. cursor: the upper bound of the last shard range to have been cleaved from the retiring DB. max_row: the retiring DB's max row; this is updated to the value of the retiring DB's max_row every time a CleavingContext is loaded for that DB, and may change during the process of cleaving the DB. cleave_to_row: the value of max_row at the moment when cleaving starts for the DB. When cleaving completes (i.e. the cleave cursor has reached the upper bound of the cleaving namespace), cleave_to_row is compared to the current max_row: if the two values are not equal then rows have been added to the DB which may not have been cleaved, in which case the CleavingContext is reset and cleaving is re-started. last_cleave_to_row: the minimum DB row from which cleaving should select objects to cleave; this is initially set to None i.e. all rows should be cleaved. If the CleavingContext is reset then the last_cleave_to_row is set to the current value of cleave_to_row, which in turn is set to the current value of max_row by a subsequent call to start. The repeated cleaving therefore only selects objects in rows greater than the last_cleave_to_row, rather than cleaving the whole DB again. ranges_done: the number of shard ranges that have been cleaved from the retiring DB. ranges_todo: the number of shard ranges that are yet to be cleaved from the retiring DB. Returns a CleavingContext tracking the cleaving progress of the given broker's DB. broker an instance of ContainerBroker An instance of CleavingContext. Returns all cleaving contexts stored in the broker's DB. broker an instance of ContainerBroker list of tuples of (CleavingContext, timestamp) Persists the serialized CleavingContext as sysmeta in the given broker's DB. broker an instance of ContainerBroker Bases: ContainerSharderConf, ContainerReplicator Shards containers. Run the container sharder until stopped. Run the container sharder once. Iterates through all object rows in src_shard_range in name order yielding them in lists of up to batch_size in length. All batches of rows that are not marked deleted are yielded before all batches of rows that are marked deleted. broker A ContainerBroker. src_shard_range A ShardRange describing the source range. since_row include only object rows whose ROWID is greater than the given row id; by default all object rows are included. batch_size The maximum number of object rows to include in each yielded batch; defaults to cleave_row_batch_size. a generator of tuples of (list of rows, broker info dict) Iterates through all object rows in src_shard_range to place them in destination shard ranges provided by the dest_shard_ranges function.
Yields tuples of (batch of object rows, destination shard range in which those object rows belong, broker" }, { "data": "info). If no destination shard range exists for a batch of object rows then tuples are yielded of (batch of object rows, None, broker info). This indicates to the caller that there are a non-zero number of object rows for which no destination shard range was found. Note that the same destination shard range may be referenced in more than one yielded tuple. broker A ContainerBroker. src_shard_range A ShardRange describing the source range. dest_shard_ranges A function which should return a list of destination shard ranges sorted in the order defined by sort_key(). a generator of tuples of (object row list, shard range, broker info dict) where shard_range may be None. Bases: object Combines new and existing shard ranges based on most recent state. new_shard_ranges a list of ShardRange instances. existing_shard_ranges a list of ShardRange instances. a list of ShardRange instances. Update donor shard ranges to shrinking state and merge donors and acceptors to broker. broker A ContainerBroker. acceptor_ranges A list of ShardRange that are to be acceptors. donor_ranges A list of ShardRange that are to be donors; these will have their state and timestamp updated. timestamp timestamp to use when updating donor state Find sequences of shard ranges that could be compacted into a single acceptor shard range. This function does not modify shard ranges. broker A ContainerBroker. shrink_threshold the number of rows below which a shard may be considered for shrinking into another shard expansion_limit the maximum number of rows that an acceptor shard range should have after other shard ranges have been compacted into it max_shrinking the maximum number of shard ranges that should be compacted into each acceptor; -1 implies unlimited. max_expanding the maximum number of acceptors to be found (i.e. the maximum number of sequences to be returned); -1 implies unlimited. include_shrinking if True then existing compactible sequences are included in the results; default is False. A list of ShardRangeList each containing a sequence of neighbouring shard ranges that may be compacted; the final shard range in the list is the acceptor Find all pairs of overlapping ranges in the given list. shard_ranges A list of ShardRange exclude_parent_child If True then overlapping pairs that have a parent-child relationship within the past time period time_period are excluded from the returned set. Default is False. time_period the specified past time period in seconds. Value of 0 means all time in the past. a set of tuples, each tuple containing ranges that overlap with each other. Returns a list of all continuous paths through the shard ranges. An individual path may not necessarily span the entire namespace, but it will span a continuous namespace without gaps. shard_ranges A list of ShardRange. A list of ShardRangeList. Find gaps in the shard ranges and pairs of shard range paths that lead to and from those gaps. For each gap a single pair of adjacent paths is selected. The concatenation of all selected paths and gaps will span the entire namespace with no overlaps. shard_ranges a list of instances of ShardRange. within_range an optional ShardRange that constrains the search space; the method will only return gaps within this range. The default is the entire namespace.
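Stepping back to the CleavingContext API documented earlier in this passage, a minimal sketch of the load/modify/store cycle; the attribute names follow the description above and are assumed to match the implementation:
```
from swift.container.sharder import CleavingContext

def record_progress(broker):
    # Load (or create) the context tied to this broker's DB id.
    context = CleavingContext.load(broker)
    # ... cleave one shard range from the retiring DB here ...
    context.ranges_done += 1
    context.ranges_todo -= 1
    # Persist the serialized context as sysmeta in the broker's DB.
    context.store(broker)
    return context.ranges_done, context.ranges_todo
```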
A list of tuples of (start_path, gap_range, end_path) where start_path is a list of ShardRanges leading to the gap, gap_range is a ShardRange synthesized to describe the namespace gap, and end_path is a list of ShardRanges leading from the" }, { "data": "gap. When gaps start or end at the namespace minimum or maximum bounds, start_path and end_path may be null paths that contain a single ShardRange covering either the minimum or maximum of the namespace. Transform the given sequences of shard ranges into a list of acceptors and a list of shrinking donors. For each given sequence the final ShardRange in the sequence (the acceptor) is expanded to accommodate the other ShardRanges in the sequence (the donors). The donors and acceptors are then merged into the broker. broker A ContainerBroker. sequences A list of ShardRangeList Sorts the given list of paths such that the most preferred path is the first item in the list. paths A list of ShardRangeList. shard_range_to_span An instance of ShardRange that describes the namespace that would ideally be spanned by a path. Paths that include this namespace will be preferred over those that do not. A sorted list of ShardRangeList. Update the own_shard_range with the up-to-date object stats from the broker. Note: this method does not persist the updated own_shard_range; callers should use broker.merge_shard_ranges if the updated stats need to be persisted. broker an instance of ContainerBroker. own_shard_range an instance of ShardRange. own_shard_range with up-to-date object_count and bytes_used. Bases: Daemon Daemon to sync syncable containers. This is done by scanning the local devices for container databases and checking for x-container-sync-to and x-container-sync-key metadata values. If they exist, newer rows since the last sync will trigger PUTs or DELETEs to the other container. The actual syncing is slightly more complicated to make use of the three (or number-of-replicas) main nodes for a container without each trying to do the exact same work but also without missing work if one node happens to be down. Two sync points are kept per container database. All rows between the two sync points trigger updates. Any rows newer than both sync points cause updates depending on the node's position for the container (primary nodes do one third, etc. depending on the replica count of course). After a sync run, the first sync point is set to the newest ROWID known and the second sync point is set to the newest ROWID for which all updates have been sent. An example may help. Assume replica count is 3 and perfectly matching ROWIDs starting at 1. First sync run, database has 6 rows: SyncPoint1 starts as -1. SyncPoint2 starts as -1. No rows between points, so no 'all updates' rows. Six rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node 3. SyncPoint1 is set as 6 (the newest ROWID known). SyncPoint2 is left as -1 since no 'all updates' rows were synced. Next sync run, database has 12 rows: SyncPoint1 starts as 6. SyncPoint2 starts as -1. The rows between -1 and 6 all trigger updates (most of which should short-circuit on the remote end as having already been done). Six more rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node 3. SyncPoint1 is set as 12 (the newest ROWID known). SyncPoint2 is set as 6 (the newest 'all updates' ROWID).
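The row-sharing rule in this example can be modelled with a toy function; this is only an illustration of the scheme described above, not how the daemon actually selects rows (which is not a plain modulo):
```
def rows_to_send(row_ids, node_index, replicas, sync_point1, sync_point2):
    send = []
    for row_id in row_ids:
        if row_id <= sync_point2:
            continue                     # already confirmed synced
        elif row_id <= sync_point1:
            send.append(row_id)          # 'all updates' catch-up rows
        elif row_id % replicas == node_index:
            send.append(row_id)          # this node's share of new rows
    return send

# Second run from the example: 12 rows, points at 6 and -1, node 0 of 3.
print(rows_to_send(range(1, 13), 0, 3, 6, -1))
# -> [1, 2, 3, 4, 5, 6, 9, 12]
```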
In this way, under normal circumstances each node sends its share of updates each run and just sends a batch of older updates to ensure nothing was missed. conf The dict of configuration values from the [container-sync] section of the container-server.conf." }, { "data": "container_ring If None, the <swift_dir>/container.ring.gz will be loaded. This is overridden by unit tests. The list of hosts we're allowed to send syncs to. This can be overridden by data in self.realms_conf The dict of configuration values from the [container-sync] section of the container-server.conf. Number of successful DELETEs triggered. Number of containers that had a failure of some type. Number of successful PUTs triggered. swift.common.ring.Ring for locating containers. Number of containers whose sync has been turned off, but are not yet cleared from the sync store. Per container stats. These are collected per container. puts - the number of puts that were done for the container deletes - the number of deletes that were for the container bytes - the total number of bytes transferred for the container Checks the given path for a container database, determines if syncing is turned on for that database and, if so, sends any updates to the other container. path the path to a container db Sends the update the row indicates to the sync_to container. Update can be either delete or put. row The updated row in the local database triggering the sync update. sync_to The URL to the remote container. user_key The X-Container-Sync-Key to use when sending requests to the other container. broker The local container database broker. info The get_info result from the local container database broker. realm The realm from self.realms_conf, if there is one. If None, fall back to using the older allowed_sync_hosts way of syncing. realm_key The realm key from self.realms_conf, if there is one. If None, fall back to using the older allowed_sync_hosts way of syncing. True on success Number of containers with sync turned on that were successfully synced. Maximum amount of time to spend syncing a container before moving on to the next one. If a container sync hasn't finished in this time, it'll just be resumed next scan. Path to the local device mount points. Minimum time between full scans. This is to keep the daemon from running wild on near empty systems. Logger to use for container-sync log lines. Indicates whether mount points should be verified as actual mount points (normally true, false for tests and SAIO). ContainerSyncCluster instance for validating sync-to values. Writes a report of the stats to the logger and resets the stats for the next report. Time of last stats report. Runs container sync scans until stopped. Runs a single container sync scan. ContainerSyncStore instance for iterating over synced containers Bases: Daemon Update container information in account listings. Report container info to an account server. node node dictionary from the account ring part partition the account is on container container name put_timestamp put timestamp delete_timestamp delete timestamp count object count in the container bytes bytes used in the container storage_policy_index the policy index for the container Walk the path looking for container DBs and process them. path path to walk Get the account ring. Load it if it hasn't been yet. Get paths to all of the partitions on each drive to be processed. a list of paths Process a container, and update the information in the account. dbfile container DB to process Run the updater continuously. Run the updater once.
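For completeness, the updater daemon just described can also be driven programmatically; a minimal sketch with illustrative config values (the daemon is normally launched via swift-init rather than like this):
```
from swift.container.updater import ContainerUpdater

updater = ContainerUpdater({
    'devices': '/srv/node',     # where the container DBs live
    'mount_check': 'false',     # e.g. for a SAIO
    'swift_dir': '/etc/swift',  # for loading the account ring
})
updater.run_once()
```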
" } ]
{ "category": "Runtime", "file_name": "misc.html#module-swift.common.container_sync_realms.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: DatabaseAuditor Audit containers. alias of ContainerBroker Pluggable Back-ends for Container Server Bases: DatabaseBroker Encapsulates working with a container database. Note that this may involve multiple on-disk DB files if the container becomes sharded: _db_file is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded. db_files is a list of existing db files for the broker. This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING state. db_file is the path to whichever db is currently authoritative for the container. Depending on the container's state, this may not be the same as the db_file argument given to __init__(), unless force_db_file is True in which case db_file is always equal to the db_file argument given to __init__(). pending_file is always equal to db_file extended with .pending, i.e. <hash>.db.pending. Create a ContainerBroker instance. If the db doesn't exist, initialize the db file. device_path device path part partition number account account name string container container name string logger a logger instance epoch a timestamp to include in the db filename put_timestamp initial timestamp if broker needs to be initialized storage_policy_index the storage policy index a tuple of (broker, initialized) where broker is an instance of swift.container.backend.ContainerBroker and initialized is True if the db file was initialized, False otherwise. Create the container_info table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. Also creates the container_stat view. conn DB connection object put_timestamp put timestamp storage_policy_index storage policy index Create the object table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Create policy_stat table. conn DB connection object storage_policy_index the policy_index the container is being created with Create the shard_range table which is specific to the container DB. conn DB connection object Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if force_db_file was True when the broker was constructed, then the primary db file is the file passed to the broker constructor. A path to a db file; the file does not necessarily exist. Gets the cached list of valid db files that exist on disk for this broker. _reload_db_files(). A list of paths to db files ordered by ascending epoch; the list may be empty. Mark an object deleted. name object name to be deleted timestamp timestamp when the object was marked as deleted storage_policy_index the storage policy index for the object Check if container DB is empty. This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty. True if the database has no active objects, False otherwise Updates this broker's own shard range with the given epoch, sets its state to SHARDING and persists it in the" }, { "data": "db. epoch a Timestamp the broker's updated own shard range. Scans the container db for shard ranges.
Scanning will start at the upper bound of any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace. This method does not modify the state of the db; callers are responsible for persisting any shard range data in the db. shard_size the size of each shard range limit the maximum number of shard points to be found; a negative value (default) implies no limit. existing_ranges an optional list of existing ShardRanges; if given, this list should be sorted in order of upper bounds; the scan for new shard ranges will start at the upper bound of the last existing ShardRange. minimum_shard_size Minimum size of the final shard range. If this is greater than one then the final shard range may be extended to more than shard_size in order to avoid a further shard range with fewer than minimum_shard_size rows. a tuple; the first value in the tuple is a list of dicts each having keys {index, lower, upper, object_count} in order of ascending upper; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise. Returns a list of all shard range data, including own shard range and deleted shard ranges. A list of dict representations of a ShardRange. Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry. a list of ContainerBroker Returns the current state of on disk db files. Get global data for the container. dict with keys: account, container, created_at, put_timestamp, delete_timestamp, status, status_changed_at, object_count, bytes_used, reported_put_timestamp, reported_delete_timestamp, reported_object_count, reported_bytes_used, hash, id, x_container_sync_point1, x_container_sync_point2, storage_policy_index and db_state. Get the is_deleted status and info for the container. a tuple, in the form (info, is_deleted) info is a dict as returned by get_info and is_deleted is a boolean. Get a list of objects which are in a storage policy different from the container's storage policy. start last reconciler sync point count maximum number of entries to get list of dicts with keys: name, created_at, size, content_type, etag, storage_policy_index Returns a list of persisted namespaces per input parameters. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fill_gaps, marker and end_marker are ignored. reverse reverse the result order. states if specified, restricts the returned list to namespaces that have one of the given states; should be a list of" }, { "data": "ints. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range.
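Putting the broker factory and the scanner described above together, a sketch; it assumes the documented factory is ContainerBroker.create_broker and the scanner is broker.find_shard_ranges, and the device path, names and sizes are illustrative:
```
from swift.common.utils import Timestamp
from swift.container.backend import ContainerBroker

broker, initialized = ContainerBroker.create_broker(
    '/srv/node/sdb1', part=100, account='AUTH_test', container='c',
    put_timestamp=Timestamp.now().internal, storage_policy_index=0)

# Scan for up to five candidate shard ranges of ~1M rows each; nothing
# is persisted by the scan itself.
found, last_found = broker.find_shard_ranges(shard_size=1000000, limit=5)
for data in found:
    print(data['index'], data['lower'], data['upper'], data['object_count'])
if last_found:
    print('scan reached the upper bound of the namespace')
```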
Gaps enclosed within the found shard ranges are not filled. a list of Namespace objects. Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {name, created_at, size, content_type, etag, deleted, storage_policy_index}. limit maximum number of entries to get marker if set, objects with names less than or equal to this value will not be included in the list. end_marker if set, objects with names greater than or equal to this value will not be included in the list. include_deleted if True, include only deleted objects; if False, include only undeleted objects; otherwise (default), include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. a list of dicts, each describing an object. Returns a shard range representing this broker's own shard range. If no such range has been persisted in the broker's shard ranges table then a default shard range representing the entire namespace will be returned. The object_count and bytes_used of the returned shard range are not guaranteed to be up-to-date with the current object stats for this broker. Callers that require up-to-date stats should use the get_info method. no_default if True and the broker's own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned. an instance of ShardRange Get information about the DB required for replication. dict containing keys from get_info plus max_row and metadata count and metadata is the raw string. Returns a list of persisted shard ranges. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fill_gaps, marker and end_marker are ignored, but other constraints are applied (e.g. exclude_others and include_deleted). reverse reverse the result order. include_deleted include items that have the delete marker set. states if specified, restricts the returned list to shard ranges that have one of the given states; should be a list of ints. include_own boolean that governs whether the row whose name matches the broker's path is included in the returned list. If True, that row is included unless it is excluded by other constraints (e.g. marker, end_marker, includes). If False, that row is not included. Default is False. exclude_others boolean that governs whether the rows whose names do not match the broker's path are included in the returned list. If True, those rows are not included, otherwise they are included. Default is False. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. fill_gaps is ignored if includes is" }, { "data": "specified. a list of instances of swift.common.utils.ShardRange. Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING.
a dict with keys {bytes_used, object_count} Returns sharding specific info from the broker's metadata. key if given the value stored under key in the sharding info will be returned. either a dict of sharding info or the value stored under key in that dict. Returns sharding specific info from the broker's metadata with timestamps. key if given the value stored under key in the sharding info will be returned. a dict of sharding info with their timestamps. This function tells if there is any shard range other than the broker's own shard range, that is not marked as deleted. A boolean value as described above. Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Returns True if this container is a root container, False otherwise. A root container is a container that is not a shard of another container. Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query path if defined, will set the prefix and delimiter based on the path storage_policy_index storage policy index for query reverse reverse the result order. include_deleted if True, include only deleted objects; if False (default), include only undeleted objects; otherwise, include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. transform_func an optional function that if given will be called for each object to get a transformed version of the object to include in the listing; should have same signature as transform_record(); defaults to transform_record(). all_policies if True, include objects for all storage policies ignoring any value given for storage_policy_index allow_reserved exclude names with reserved-byte by default list of tuples of (name, created_at, size, content_type, etag, deleted) Turn this db record dict into the format this service uses for pending pickles. Merge items into the object table. item_list list of dictionaries of {name, created_at, size, content_type, etag, deleted, storage_policy_index, ctype_timestamp, meta_timestamp} source if defined, update incoming_sync with the source Merge shard ranges into the shard range table. shard_ranges a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARD_RANGE_KEYS. Creates an object in the DB with its metadata. name object name to be created timestamp timestamp of when the object was created size object size content_type object content-type etag object etag deleted if True, marks the object as deleted and sets the deleted_at timestamp to timestamp storage_policy_index the storage policy index for the object ctype_timestamp timestamp of when content_type was last updated meta_timestamp timestamp of when metadata was last updated Reloads the cached list of valid on disk db files for this broker. Removes object records in the given namespace range from the object table. Note that objects are removed regardless of their storage_policy_index." }, { "data": "lower defines the lower bound of object names that will be removed; names greater than this value will be removed; names less than or equal to this value will not be removed.
upper defines the upper bound of object names that will be removed; names less than or equal to this value will be removed; names greater than this value will not be removed. The empty string is interpreted as there being no upper bound. max_row if specified only rows less than or equal to max_row will be removed Update reported stats, available with the container's get_info. put_timestamp put_timestamp to update delete_timestamp delete_timestamp to update object_count object_count to update bytes_used bytes_used to update Given a list of values each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list. The following alias values are supported: listing maps to all states that are considered valid when listing objects; updating maps to all states that are considered valid for redirecting an object update; auditing maps to all states that are considered valid for a shard container that is updating its own shard range table from a root (this currently maps to all states except FOUND). states a list of values each of which may be the name of a state, the number of a state, or an alias a set of integer state numbers, or None if no states are given ValueError if any value in the given list is neither a valid state nor a valid alias Unlinks the broker's retiring DB file. True if the retiring DB was successfully unlinked, False otherwise. Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The broker's own shard range must have an epoch timestamp for this method to succeed. True if the fresh DB was successfully created, False otherwise. Updates the broker's metadata stored under the given key prefixed with a sharding specific namespace. key metadata key in the sharding metadata namespace. value metadata value Update the container_stat policy_index and status_changed_at. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise. Compares shard_data with existing and updates shard_data with any items of existing that take precedence over the corresponding item in shard_data. shard_data a dict representation of a shard range that may be modified by this method. existing a dict representation of a shard range. True if shard data has any item(s) that are considered to take precedence over the corresponding item in existing Compares new and existing shard ranges, updating the new shard ranges with any more recent state from the existing, and returns shard ranges sorted into those that need adding because they contain new or updated state and those that need deleting because their state has been superseded. new_shard_ranges a list of dicts, each of which represents a shard range. existing_shard_ranges a dict mapping shard range names to dicts representing a shard range. a tuple (to_add, to_delete); to_add is a list of dicts, each of which represents a shard range that is to be added to the existing shard ranges; to_delete is a set of shard range names that are to be" }, { "data": "deleted. Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer.
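A sketch of the state-alias resolution just described, fed into a shard range query; it assumes the documented classmethod is ContainerBroker.resolve_shard_range_states and reuses a broker as in the earlier sketch:
```
from swift.container.backend import ContainerBroker

# 'listing' expands to every state that may contribute to an object
# listing; an unrecognised name raises ValueError as described above.
listing_states = ContainerBroker.resolve_shard_range_states(['listing'])
print(sorted(listing_states))

# broker: a ContainerBroker, e.g. from the create_broker sketch above.
ranges = broker.get_shard_ranges(
    marker='k', end_marker='p', states=list(listing_states), fill_gaps=True)
for sr in ranges:
    print(sr.name, sr.lower, sr.upper)
```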
The multiple timestamps are encoded into a single string for storing in the created_at column of the objects db table. new_item A dict of object update attributes existing A dict of existing object attributes True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False implying that the updated item is equal to the existing. Bases: Replicator alias of ContainerBroker Cleanup non primary database from disk if needed. broker the broker for the database we're replicating orig_info snapshot of the broker replication info dict taken before replication responses a list of boolean success values for each replication request to other nodes returns False if deletion of the database was attempted but unsuccessful, otherwise returns True. Ensure that reconciler databases are only cleaned up at the end of the replication run. Look for object rows for object updates in the wrong storage policy in broker with a ROWID greater than the rowid given as point. broker the container broker with misplaced objects point the last verified reconciler_sync_point the last successfully enqueued rowid Add queue entries for rows in item_list to the local reconciler container database. container the name of the reconciler container item_list the list of rows to enqueue True if successfully enqueued Find a device in the ring that is on this node on which to place a partition. Preference is given to a device that is a primary location for the partition. If no such device is found then a local device with weight is chosen, and failing that any local device. part a partition a node entry from the ring Get a local instance of the reconciler container broker that is appropriate to enqueue the given timestamp. timestamp the timestamp of the row to be enqueued a local reconciler broker Ensure any items merged to reconciler containers during replication are pushed out to correct nodes and any reconciler containers that do not belong on this node are removed. Run a replication pass once. Bases: ReplicatorRpc If broker has own_shard_range with an epoch then filter out an own_shard_range without an epoch, and log a warning about it. shards a list of candidate ShardRanges to merge broker a ContainerBroker logger a logger source string to log as source of shards a list of ShardRanges to actually merge Bases: BaseStorageServer WSGI Controller for the container server. Handle HTTP DELETE request. Handle HTTP GET request. The body of the response to a successful GET request contains a listing of either objects or shard ranges. The exact content of the listing is determined by a combination of request headers and query string parameters, as follows: The type of the listing is determined by the X-Backend-Record-Type header. If this header has value shard then the response body will be a list of shard ranges; if this header has value auto, and the container state is sharding or sharded, then the listing will be a list of shard ranges; otherwise the response body will be a list of objects. Both shard range and object listings may be filtered according to the constraints described" }, { "data": "below. However, the X-Backend-Ignore-Shard-Name-Filter header may be used to override the application of the marker, end_marker, includes and reverse parameters to shard range listings. These parameters will be ignored if the header has the value sharded and the current db sharding state is also sharded. Note that this header does not override the states constraint on shard range listings.
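Returning to the multi-timestamp created_at encoding noted at the top of this passage: a codec for it is assumed to be importable from swift.common.utils, sketched here with illustrative timestamp values:
```
from swift.common.utils import Timestamp, encode_timestamps, decode_timestamps

t_data = Timestamp('1712345678.12345')   # when the object data was written
t_ctype = Timestamp('1712345680.00000')  # when content-type last changed
t_meta = Timestamp('1712345682.00000')   # when metadata last changed

packed = encode_timestamps(t_data, t_ctype, t_meta)
print(packed)                     # one string for the created_at column
print(decode_timestamps(packed))  # -> the three Timestamps again
```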
The order of both shard range and object listings may be reversed by using a reverse query string parameter with a value in swift.common.utils.TRUE_VALUES. Both shard range and object listings may be constrained to a name range by the marker and end_marker query string parameters. Object listings will only contain objects whose names are greater than any marker value and less than any end_marker value. Shard range listings will only contain shard ranges whose namespace is greater than or includes any marker value and is less than or includes any end_marker value. Shard range listings may also be constrained by an includes query string parameter. If this parameter is present the listing will only contain shard ranges whose namespace includes the value of the parameter; any marker or end_marker parameters are ignored The length of an object listing may be constrained by the limit parameter. Object listings may also be constrained by prefix, delimiter and path query string parameters. Shard range listings will include deleted shard ranges if and only if the X-Backend-Include-Deleted header value is one of swift.common.utils.TRUE_VALUES. Object listings never include deleted objects. Shard range listings may be constrained to include only shard ranges whose state is specified by a query string states parameter. If present, the states parameter should be a comma separated list of either the string or integer representation of STATES. Alias values may be used in a states parameter value. The listing alias will cause the listing to include all shard ranges in a state suitable for contributing to an object listing. The updating alias will cause the listing to include all shard ranges in a state suitable to accept an object update. If either of these aliases is used then the shard range listing will if necessary be extended with a synthesised filler range in order to satisfy the requested name range when insufficient actual shard ranges are found. Any filler shard range will cover the otherwise uncovered tail of the requested name range and will point back to the same container. The auditing alias will cause the listing to include all shard ranges in a state useful to the sharder while auditing a shard container. This alias will not cause a filler range to be added, but will cause the containers own shard range to be included in the listing. For now, auditing is only supported when X-Backend-Record-Shard-Format is full. Shard range listings can be simplified to include only Namespace only attributes (name, lower and upper) if the caller send the header X-Backend-Record-Shard-Format with value namespace as a hint that it would prefer namespaces. If this header doesnt exist or the value is full, the listings will default to include all attributes of shard ranges. But if params has includes/marker/end_marker then the response will be full shard ranges, regardless the header of X-Backend-Record-Shard-Format. The response header X-Backend-Record-Type will tell the user what type it gets back. Listings are not normally returned from a deleted container. However, the X-Backend-Override-Deleted header may be used with a value in swift.common.utils.TRUE_VALUES to force a shard range listing to be returned from a deleted container whose DB file still" }, { "data": "req an instance of swift.common.swob.Request an instance of swift.common.swob.Response Returns a list of objects in response. req swob.Request object broker container DB broker object container container name params the request params. 
info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Returns a list of persisted shard ranges or namespaces in response. req swob.Request object broker container DB broker object container container name params the request params. info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Handle HTTP HEAD request. Handle HTTP POST request. A POST request will update the containers put_timestamp, unless it has an X-Backend-No-Timestamp-Update header with a truthy value. req an instance of Request. Handle HTTP PUT request. Update or create container. Put object into container. Put shards into container. Handle HTTP REPLICATE request (json-encoded RPC calls for replication.) Handle HTTP UPDATE request (merge_items RPCs coming from the proxy.) Update the account server(s) with latest container info. req swob.Request object account account name container container name broker container DB broker object if all the account requests return a 404 error code, HTTPNotFound response object, if the account cannot be updated due to a malformed header, an HTTPBadRequest response object, otherwise None. The list of hosts were allowed to send syncs to. This can be overridden by data in self.realms_conf Validate that the index supplied maps to a policy. policy index from request, or None if not present HTTPBadRequest if the supplied index is bogus ContainerSyncCluster instance for validating sync-to values. Perform mutation to container listing records that are common to all serialization formats, and returns it as a dict. Converts created time to iso timestamp. Replaces size with swift_bytes content type parameter. record object entry record modified record Return the shard_range database record as a dict, the keys will depend on the database fields provided in the record. record shard entry record, either ShardRange or Namespace. shardrecordfull boolean, when true the timestamp field is added as last_modified in iso format. dict suitable for listing responses paste.deploy app factory for creating WSGI container server apps Convert container info dict to headers. Split and validate path for a container. req a swob request a tuple of path parts as strings Split and validate path for an object. req a swob request a tuple of path parts as strings Bases: Daemon Move objects that are in the wrong storage policy. Validate source object will satisfy the misplaced object queue entry and move to destination. qpolicyindex the policy_index for the source object account the account name of the misplaced object container the container name of the misplaced object obj the name of the misplaced object q_ts the timestamp of the misplaced object path the full path of the misplaced object for logging containerpolicyindex the policy_index of the destination source_ts the timestamp of the source object sourceobjstatus the HTTP status source object request sourceobjinfo the HTTP headers of the source object request sourceobjiter the body iter of the source object request Issue a DELETE request against the destination to match the misplaced DELETE against the source. Dump stats to logger, noop when stats have been already been logged in the last minute. 
Issue a delete object request to the container for the misplaced object queue" }, { "data": "container the misplaced objects container obj the name of the misplaced object q_ts the timestamp of the misplaced object q_record the timestamp of the queue entry N.B. qts will normally be the same time as qrecord except when an object was manually re-enqued. Process an entry and remove from queue on success. q_container the queue container qentry the rawobj name from the q_container queue_item a parsed entry from the queue Main entry point for concurrent processing of misplaced objects. Iterate over all queue entries and delegate processing to spawned workers in the pool. Process a possibly misplaced object write request. Determine correct destination storage policy by checking with primary containers. Check source and destination, copying or deleting into destination and cleaning up the source as needed. This method wraps reconcileobject for exception handling. info a queue entry dict True to indicate the request is fully processed successfully, otherwise False. Override this to run forever Process every entry in the queue. Check if a given entry should be handled by this process. container the queue container queue_item an entry from the queue Update stats tracking for metric and emit log message. Issue a delete object request to the given storage_policy. account the account name container the container name obj the object name timestamp the timestamp of the object to delete policy_index the policy index to direct the request path the path to be used for logging Add an object to the container reconcilers queue. This will cause the container reconciler to move it from its current storage policy index to the correct storage policy index. container_ring container ring account the misplaced objects account container the misplaced objects container obj the misplaced object objpolicyindex the policy index where the misplaced object currently is obj_timestamp the misplaced objects X-Timestamp. We need this to ensure that the reconciler doesnt overwrite a newer object with an older one. op the method of the operation (DELETE or PUT) force over-write queue entries newer than obj_timestamp conn_timeout max time to wait for connection to container server response_timeout max time to wait for response from container server .misplaced_object container name, False on failure. Success means a majority of containers got the update. You have to squint to see it, but the general strategy is just: return the newest (of the recreated) return the oldest I tried cleaning it up for awhile, but settled on just writing a bunch of tests instead. Once you get an intuitive sense for the nuance here you can try and see theres a better way to spell the boolean logic but it all ends up looking sorta hairy. -1 if info is correct, 1 if remote_info is better Talk directly to the primary container servers to delete a particular object listing. Does not talk to object servers; use this only when a container entry does not actually have a corresponding object. Get the name of a container into which a misplaced object should be enqueued. The name is the objects last modified time rounded down to the nearest hour. objtimestamp a string representation of the objects createdat time from its container db row. a container name Compare remote_info to info and decide if the remote storage policy index should be used instead of ours. 
Translate a reconciler container listing entry to a dictionary containing the parts of the misplaced object queue" }, { "data": "obj_info an entry in an a container listing with the required keys: name, content_type, and hash a queue entry dict with the keys: qpolicyindex, account, container, obj, qop, qts, q_record, and path Bases: object Encapsulates metadata associated with the process of cleaving a retiring DB. This metadata includes: ref: The unique part of the key that is used when persisting a serialized CleavingContext as sysmeta in the DB. The unique part of the key is based off the DB id. This ensures that each context is associated with a specific DB file. The unique part of the key is included in the CleavingContext but should not be modified by any caller. cursor: the upper bound of the last shard range to have been cleaved from the retiring DB. max_row: the retiring DBs max row; this is updated to the value of the retiring DBs max_row every time a CleavingContext is loaded for that DB, and may change during the process of cleaving the DB. cleavetorow: the value of max_row at the moment when cleaving starts for the DB. When cleaving completes (i.e. the cleave cursor has reached the upper bound of the cleaving namespace), cleavetorow is compared to the current max_row: if the two values are not equal then rows have been added to the DB which may not have been cleaved, in which case the CleavingContext is reset and cleaving is re-started. lastcleaveto_row: the minimum DB row from which cleaving should select objects to cleave; this is initially set to None i.e. all rows should be cleaved. If the CleavingContext is reset then the lastcleaveto_row is set to the current value of cleavetorow, which in turn is set to the current value of max_row by a subsequent call to start. The repeated cleaving therefore only selects objects in rows greater than the lastcleaveto_row, rather than cleaving the whole DB again. ranges_done: the number of shard ranges that have been cleaved from the retiring DB. ranges_todo: the number of shard ranges that are yet to be cleaved from the retiring DB. Returns a CleavingContext tracking the cleaving progress of the given brokers DB. broker an instances of ContainerBroker An instance of CleavingContext. Returns all cleaving contexts stored in the brokers DB. broker an instance of ContainerBroker list of tuples of (CleavingContext, timestamp) Persists the serialized CleavingContext as sysmeta in the given brokers DB. broker an instances of ContainerBroker Bases: ContainerSharderConf, ContainerReplicator Shards containers. Run the container sharder until stopped. Run the container sharder once. Iterates through all object rows in srcshardrange in name order yielding them in lists of up to batch_size in length. All batches of rows that are not marked deleted are yielded before all batches of rows that are marked deleted. broker A ContainerBroker. srcshardrange A ShardRange describing the source range. since_row include only object rows whose ROWID is greater than the given row id; by default all object rows are included. batch_size The maximum number of object rows to include in each yielded batch; defaults to cleaverowbatch_size. a generator of tuples of (list of rows, broker info dict) Iterates through all object rows in srcshardrange to place them in destination shard ranges provided by the destshardranges function. 
Yields tuples of (batch of object rows, destination shard range in which those object rows belong, broker" }, { "data": "If no destination shard range exists for a batch of object rows then tuples are yielded of (batch of object rows, None, broker info). This indicates to the caller that there are a non-zero number of object rows for which no destination shard range was found. Note that the same destination shard range may be referenced in more than one yielded tuple. broker A ContainerBroker. srcshardrange A ShardRange describing the source range. destshardranges A function which should return a list of destination shard ranges sorted in the order defined by sort_key(). a generator of tuples of (object row list, shard range, broker info dict) where shard_range may be None. Bases: object Combines new and existing shard ranges based on most recent state. newshardranges a list of ShardRange instances. existingshardranges a list of ShardRange instances. a list of ShardRange instances. Update donor shard ranges to shrinking state and merge donors and acceptors to broker. broker A ContainerBroker. acceptor_ranges A list of ShardRange that are to be acceptors. donor_ranges A list of ShardRange that are to be donors; these will have their state and timestamp updated. timestamp timestamp to use when updating donor state Find sequences of shard ranges that could be compacted into a single acceptor shard range. This function does not modify shard ranges. broker A ContainerBroker. shrink_threshold the number of rows below which a shard may be considered for shrinking into another shard expansion_limit the maximum number of rows that an acceptor shard range should have after other shard ranges have been compacted into it max_shrinking the maximum number of shard ranges that should be compacted into each acceptor; -1 implies unlimited. max_expanding the maximum number of acceptors to be found (i.e. the maximum number of sequences to be returned); -1 implies unlimited. include_shrinking if True then existing compactible sequences are included in the results; default is False. A list of ShardRangeList each containing a sequence of neighbouring shard ranges that may be compacted; the final shard range in the list is the acceptor Find all pairs of overlapping ranges in the given list. shard_ranges A list of ShardRange excludeparentchild If True then overlapping pairs that have a parent-child relationship within the past time period time_period are excluded from the returned set. Default is False. time_period the specified past time period in seconds. Value of 0 means all time in the past. a set of tuples, each tuple containing ranges that overlap with each other. Returns a list of all continuous paths through the shard ranges. An individual path may not necessarily span the entire namespace, but it will span a continuous namespace without gaps. shard_ranges A list of ShardRange. A list of ShardRangeList. Find gaps in the shard ranges and pairs of shard range paths that lead to and from those gaps. For each gap a single pair of adjacent paths is selected. The concatenation of all selected paths and gaps will span the entire namespace with no overlaps. shard_ranges a list of instances of ShardRange. within_range an optional ShardRange that constrains the search space; the method will only return gaps within this range. The default is the entire namespace. 
A list of tuples of (startpath, gaprange, end_path) where start_path is a list of ShardRanges leading to the gap, gap_range is a ShardRange synthesized to describe the namespace gap, and end_path is a list of ShardRanges leading from the" }, { "data": "When gaps start or end at the namespace minimum or maximum bounds, startpath and endpath may be null paths that contain a single ShardRange covering either the minimum or maximum of the namespace. Transform the given sequences of shard ranges into a list of acceptors and a list of shrinking donors. For each given sequence the final ShardRange in the sequence (the acceptor) is expanded to accommodate the other ShardRanges in the sequence (the donors). The donors and acceptors are then merged into the broker. broker A ContainerBroker. sequences A list of ShardRangeList Sorts the given list of paths such that the most preferred path is the first item in the list. paths A list of ShardRangeList. shardrangeto_span An instance of ShardRange that describes the namespace that would ideally be spanned by a path. Paths that include this namespace will be preferred over those that do not. A sorted list of ShardRangeList. Update the ownshardrange with the up-to-date object stats from the broker. Note: this method does not persist the updated ownshardrange; callers should use broker.mergeshardranges if the updated stats need to be persisted. broker an instance of ContainerBroker. ownshardrange and instance of ShardRange. ownshardrange with up-to-date object_count and bytes_used. Bases: Daemon Daemon to sync syncable containers. This is done by scanning the local devices for container databases and checking for x-container-sync-to and x-container-sync-key metadata values. If they exist, newer rows since the last sync will trigger PUTs or DELETEs to the other container. The actual syncing is slightly more complicated to make use of the three (or number-of-replicas) main nodes for a container without each trying to do the exact same work but also without missing work if one node happens to be down. Two sync points are kept per container database. All rows between the two sync points trigger updates. Any rows newer than both sync points cause updates depending on the nodes position for the container (primary nodes do one third, etc. depending on the replica count of course). After a sync run, the first sync point is set to the newest ROWID known and the second sync point is set to newest ROWID for which all updates have been sent. An example may help. Assume replica count is 3 and perfectly matching ROWIDs starting at 1. First sync run, database has 6 rows: SyncPoint1 starts as -1. SyncPoint2 starts as -1. No rows between points, so no all updates rows. Six rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node 3. SyncPoint1 is set as 6 (the newest ROWID known). SyncPoint2 is left as -1 since no all updates rows were synced. Next sync run, database has 12 rows: SyncPoint1 starts as 6. SyncPoint2 starts as -1. The rows between -1 and 6 all trigger updates (most of which should short-circuit on the remote end as having already been done). Six more rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node SyncPoint1 is set as 12 (the newest ROWID known). SyncPoint2 is set as 6 (the newest all updates ROWID). 
In this way, under normal circumstances each node sends its share of updates each run and just sends a batch of older updates to ensure nothing was missed. conf The dict of configuration values from the [container-sync] section of the container-server.conf. container_ring If None, the <swift_dir>/container.ring.gz will be loaded. This is overridden by unit tests. The list of hosts we're allowed to send syncs to. This can be overridden by data in self.realms_conf The dict of configuration values from the [container-sync] section of the container-server.conf. Number of successful DELETEs triggered. Number of containers that had a failure of some type. Number of successful PUTs triggered. swift.common.ring.Ring for locating containers. Number of containers whose sync has been turned off, but are not yet cleared from the sync store. Per container stats. These are collected per container. puts - the number of puts that were done for the container deletes - the number of deletes that were done for the container bytes - the total number of bytes transferred per the container Checks the given path for a container database, determines if syncing is turned on for that database and, if so, sends any updates to the other container. path the path to a container db Sends the update the row indicates to the sync_to container. Update can be either delete or put. row The updated row in the local database triggering the sync update. sync_to The URL to the remote container. user_key The X-Container-Sync-Key to use when sending requests to the other container. broker The local container database broker. info The get_info result from the local container database broker. realm The realm from self.realms_conf, if there is one. If None, fallback to using the older allowed_sync_hosts way of syncing. realm_key The realm key from self.realms_conf, if there is one. If None, fallback to using the older allowed_sync_hosts way of syncing. True on success Number of containers with sync turned on that were successfully synced. Maximum amount of time to spend syncing a container before moving on to the next one. If a container sync hasn't finished in this time, it'll just be resumed next scan. Path to the local device mount points. Minimum time between full scans. This is to keep the daemon from running wild on near empty systems. Logger to use for container-sync log lines. Indicates whether mount points should be verified as actual mount points (normally true, false for tests and SAIO). ContainerSyncCluster instance for validating sync-to values. Writes a report of the stats to the logger and resets the stats for the next report. Time of last stats report. Runs container sync scans until stopped. Runs a single container sync scan. ContainerSyncStore instance for iterating over synced containers Bases: Daemon Update container information in account listings. Report container info to an account server. node node dictionary from the account ring part partition the account is on container container name put_timestamp put timestamp delete_timestamp delete timestamp count object count in the container bytes bytes used in the container storage_policy_index the policy index for the container Walk the path looking for container DBs and process them. path path to walk Get the account ring. Load it if it hasn't been yet. Get paths to all of the partitions on each drive to be processed. a list of paths Process a container, and update the information in the account. dbfile container DB to process Run the updater continuously. Run the updater once.
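The per-node split used by the container-sync scheme above can be pictured with a small sketch. This is an illustration of the idea only, not the daemon's actual code: the real implementation partitions rows by hashing, while this sketch keys off the row id under an assumed replica count of 3:

```python
REPLICA_COUNT = 3  # assumed, matching the example above

def rows_for_node(rows, node_index, sync_point1, sync_point2):
    """Yield the rows this node should sync, per the two-sync-point scheme."""
    for row in rows:
        if row['ROWID'] <= sync_point2:
            continue                      # already synced by all nodes
        elif row['ROWID'] <= sync_point1:
            yield row                     # "all updates" catch-up range
        elif row['ROWID'] % REPLICA_COUNT == node_index:
            yield row                     # this node's third of the new rows
```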
{ "category": "Runtime", "file_name": "misc.html#module-swift.common.storage_policy.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parse_acl_v1() or parse_acl_v2()) the return value of parse_acl_v1() or parse_acl_v2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {'groups': [...], 'referrers': [...]} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, I'm looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in __init__. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader('Accept', 'text/html') Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skip_accept_encoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed.
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always b'\n' for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object
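A hedged sketch of a backend GET using the helper documented above; the address, port, device and partition values are illustrative assumptions:

```python
from swift.common.bufferedhttp import http_connect

conn = http_connect('127.0.0.1', 6200, 'sdb1', '3', 'GET', '/a/c/o',
                    headers={'X-Backend-Storage-Policy-Index': '0'})
resp = conn.getresponse()   # a BufferedHTTPResponse
print(resp.status)
body = resp.read()
conn.close()
```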
Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isn't possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be checked. req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us from accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFT_CONF_FILE and reset module level global constraint attrs, populating OVERRIDE_CONSTRAINTS and EFFECTIVE_CONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp
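A small sketch of the metadata check in practice; it assumes swob's Request.blank (documented later in this reference) for building the request:

```python
from swift.common.constraints import check_metadata
from swift.common.swob import Request

req = Request.blank('/v1/a/c', method='POST',
                    headers={'X-Container-Meta-Color': 'blue'})
resp = check_metadata(req, 'container')
print(resp)  # None when the metadata passes validation,
             # otherwise an HTTPBadRequest response
```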
Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the file's mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the user's container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digest_algorithm, hex_encoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. ValueError on parse failures Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated it'll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULT_ALLOWED_DIGESTS. logger optional logger; if provided, use it to issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared secret. digest digest constructor or the string name for the digest to use in calculating the HMAC. Defaults to SHA1. ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/o\n127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm.
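For illustration, the documented message layout can be reproduced with the standard library; get_hmac computes this kind of digest (and its ip_range variant prepends the range first, per the note above). The key and expiry values here are made up:

```python
import hmac
from hashlib import sha1

def temp_url_sig(method, path, expires, key):
    # Message layout described above: method, expiry, then path.
    message = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), message.encode(), sha1).hexdigest()

print(temp_url_sig('GET', '/v1/AUTH_test/c/o', 1700000000, 'mysecret'))
```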
Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will be a HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g., {'states': 'updating'} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError. a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response resp_chunk_size if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the object's contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make requests. node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the response's headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed
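A hedged sketch of a direct backend call; the ring path and names are assumptions, and error handling (ClientException) is omitted for brevity:

```python
from swift.common.ring import Ring
from swift.common.direct_client import direct_head_container

ring = Ring('/etc/swift', ring_name='container')
part, nodes = ring.get_nodes('AUTH_test', 'mycontainer')

# HEAD the first primary node directly, bypassing the proxy.
headers = direct_head_container(nodes[0], part, 'AUTH_test', 'mycontainer')
print(headers.get('X-Container-Object-Count'))
```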
Make a PUT request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in don't over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of times. func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyword arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file object's read(). Compressed chunk from file object.
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. request_tries Number of tries before InternalClient.make_request() gives up. use_replication_network Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in global_conf will override those in conf_path except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The container's account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The container's account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The container's account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The object's account. container The object's container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (container_count, object_count) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account metadata. account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The container's account.
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The object's account. container The object's container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The object's account. container The object's container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of container dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to ''. end_marker Last item returned will be less than this, defaults to ''. prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the object's name ends with .gz. account The object's account. container The object's container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The container's account. container Container to iterate objects on. marker Prefix of first desired item, defaults to ''. end_marker Last item returned will be less than this, defaults to ''. prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTP_NOT_FOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is not. Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request.
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The container's account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an object's metadata. The object's metadata will be overwritten by the values in the metadata dict. account The object's account. container The object's container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to ''. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read object's content from. account The object's account. container The object's container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response.
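A hedged sketch of the client in use; it assumes an internal-client style proxy config exists at the given path:

```python
from swift.common.internal_client import InternalClient

client = InternalClient('/etc/swift/internal-client.conf',
                        'example-daemon', request_tries=3)
for container in client.iter_containers('AUTH_test'):
    print(container['name'], container['count'])
```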
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of servers. servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names that are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile a conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name
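A hedged sketch of driving servers programmatically, the way swift-init does; the command methods mirror the public commands listed above, and the exact return-value conventions are assumptions:

```python
from swift.common.manager import Manager

manager = Manager(['object-replicator', 'object-auditor'])
if manager.status() != 0:   # 0 means all tracked pids look healthy
    manager.once()          # start each server and run a single pass
```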
Get the current set of all child PIDs for a PID. pid process id Send signal to process group pid process id sig signal to send Send signal to process and check process name pid process id sig signal to send name name to ensure target process Try to increase resource limits of the OS. Move PYTHON_EGG_CACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the server's binaries are installed or not. server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that aren't responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesn't use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but it's hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldn't work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parse_socket_string() for details. Generate a new pool item. In order for the pool to function, either this method must be overridden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract from the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError
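A small sketch of the client documented here; the server address is an assumption:

```python
from swift.common.memcached import MemcacheRing

cache = MemcacheRing(['127.0.0.1:11211'])
cache.set('mykey', {'some': 'value'}, time=60)  # JSON-serialized by default
print(cache.get('mykey'))                       # {'some': 'value'}
print(cache.incr('hits', delta=1, time=300))    # 1 on the first call
```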
Deletes a key/value pair from memcache. key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raise_on_error if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by delta. If the key can't be found, it's added as delta or 0 if delta < 0. If passed a negative number, will use memcached's decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live min_compress_len minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it. raise_on_error if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live min_compress_len minimum compress length, this parameter was added to keep the signature compatible with python-memcached interface. This implementation ignores it Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the register_swift_info call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the reveal_sensitive_prefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters.
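A brief sketch of middleware registering /info data and redaction hints with these helpers:

```python
from swift.common.registry import (
    get_swift_info, register_sensitive_header, register_swift_info)

register_swift_info('example_middleware', max_widget_size=1024)
register_sensitive_header('x-example-secret')  # redacted by proxy_logging
print(get_swift_info()['example_middleware'])  # {'max_widget_size': 1024}
```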
Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the reveal_sensitive_prefix option in the proxy-server sample config for more information. query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include temp_url_signature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to get_swift_info. Note: do not use "." in the section name or any keys in kwargs; "." is used in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has "." in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large object. req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. max_get_time maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) response_body_length optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .app_iter_range(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnects. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segment's etag doesn't match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment to check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from from_r to to_r. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key.
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to split_and_validate_path(). a list, result of split_and_validate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling of UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server type's user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably won't work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response.
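The metadata-prefix helpers above compose as in this short sketch; the printed values follow from the documented naming convention:

```python
from swift.common.request_helpers import (
    get_sys_meta_prefix, is_sys_meta, strip_sys_meta_prefix)

prefix = get_sys_meta_prefix('object')       # 'x-object-sysmeta-'
key = prefix + 'my-feature'

assert is_sys_meta('object', key)
print(strip_sys_meta_prefix('object', key))  # 'my-feature'
```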
Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allow_open_expired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the request's X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middleware's alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everything's okay, as native strings HTTPBadRequest if something's not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key.
Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object names. HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasn't stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but it's also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Request's Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers['Content-Range'] sets and gets the value of headers.environ['HTTP_CONTENT_RANGE'] Bases: object Wraps a Request's If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Request's Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges.
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered syntactically invalid, thus, a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid, ValueError will be thrown so that the header will be ignored. value does not start with bytes= range value start is greater than the end, e.g. bytes=5-3 range does not have start or end, e.g. bytes=- range does not have hyphen, e.g. bytes=45 range value is non-numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method __init__ made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored which will eventually make the response be 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, 416 will be returned. If a returned list has at least one element, the list indicates that there is at least one range valid and the server should serve the request with a 206 status code. The start value of each range represents the starting position in the content, the end value represents the ending position. This method purposely adds 1 to the end number because the spec defines the Range to be inclusive. The Range spec can be found at the following link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.1 length length of the underlying content Bases: object WSGI Request object. Retrieve and set the accept property in the WSGI environ, as an Accept object Get and set the swob.ACL property in the WSGI environment Create a new request object with the given parameters, and an environment otherwise filled in with non-surprising default values. path encoded, parsed, and unquoted into PATH_INFO environ WSGI environ dictionary headers HTTP headers body stuffed in a WsgiBytesIO and hung on wsgi.input kwargs any environ key with a property setter Get and set the request body str Get and set the wsgi.input property in the WSGI environment Calls the application with this request's environment. Returns the status, headers, and app_iter for the response as a tuple. application the WSGI application to call Retrieve and set the content-length header as an int Makes a copy of the request, converting it to a GET. Similar to timestamp, but the X-Timestamp header will be set if not present. HTTPBadRequest if X-Timestamp is already set but not a valid Timestamp the request's X-Timestamp header, as a Timestamp Calls the application with this request's environment. Returns a Response object that wraps up the application's result.
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers don't contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the path_info, and appends it to the script_name. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Request's path. Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted rest_with_last If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTP_X_TIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTP_USER_AGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environment's HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leave_relative_location'] = True in the WSGI environment. Attempt to construct an absolute location.
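As a quick illustration of the Request API described above, the following sketch builds a request with Request.blank(), splits its path, and dispatches it to a trivial WSGI app; the path, header and app here are hypothetical:

```
# Sketch of swob.Request usage; the account/container/object names are made up.
from swift.common import swob

req = swob.Request.blank('/v1/AUTH_test/cont/obj?format=json',
                         method='GET', headers={'X-Trans-Id': 'tx123'})
version, account, container, obj = req.split_path(4, 4, True)
assert (account, container, obj) == ('AUTH_test', 'cont', 'obj')
assert req.params.get('format') == 'json'

def app(environ, start_response):
    # a trivial WSGI application used only for this demonstration
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

resp = req.get_response(app)
assert resp.status_int == 200 and resp.body == b'hello'
```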
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of an If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need to support conditional requests. It's most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or app_iter to reset the content_length properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators' close methods, if any. Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of _get_next_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets garbage collected Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads.
This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve. The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 means that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used members. Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <account_name>/<container_name>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces.
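A small sketch of the Namespace semantics described above (lower bound excluded, upper bound included), assuming a Swift version where Namespace is importable from swift.common.utils; the names and bounds are invented:

```
# Sketch of Namespace bounds semantics (illustrative names and bounds).
from swift.common.utils import Namespace

ns1 = Namespace('AUTH_test/cont-0', lower='', upper='m')
ns2 = Namespace('AUTH_test/cont-1', lower='f', upper='t')

assert 'apple' in ns1          # '' < 'apple' <= 'm'
assert ns1.overlaps(ns2)       # ('', 'm'] overlaps ('f', 't']
assert not ns1.includes(ns2)   # ns2's upper bound extends past ns1's
```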
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elements_per_second the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<root_container>-<parent_container_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root container's own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, root_container, parent_container and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/ <root_container>-<parent_container_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <account_name>/<container_name>. timestamp a timestamp that represents the time at which the shard range's lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range namespace. upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard range's object_count and bytes_used were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp.
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard range's ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard range's parent shard range, the parent's parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard range's root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard range's root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class __init__() method and requires all of the __init__() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if object_count or bytes_used cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard ranges. This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard.
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shards_account, root_container, parent_container and timestamp. a string of the form <account_name>/<container_name> Given a value that may be either the name or the number of a state, return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if object_count or bytes_used cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range that satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found.
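The following sketch ties together make_path() and the stats setters described above; the account and container names are made up, and it assumes ShardRange and Timestamp are importable from swift.common.utils:

```
# Sketch of constructing and updating a ShardRange (illustrative names).
import time
from swift.common.utils import ShardRange, Timestamp

now = Timestamp(time.time())
name = ShardRange.make_path('.shards_AUTH_test',  # hidden shards account
                            'cont',               # root container
                            'cont',               # parent == root for gen-1
                            now, 0)
sr = ShardRange(name, now, lower='a', upper='m')
sr.update_meta(object_count=10, bytes_used=1024)

assert 'apple' in sr          # within (lower, upper]
assert sr.object_count == 10
```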
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source. ``` >>> si = Spliterator(["abcde", "fg", "hijkl"]) >>> ''.join(si.take(4)) "abcd" >>> ''.join(si.take(3)) "efg" >>> ''.join(si.take(1)) "h" >>> ''.join(si.take(3)) "ijk" >>> ''.join(si.take(3)) "l" # shorter than requested; this can happen with the last iterator ``` Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpected_items_func is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpected_items_func a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet timeouts, it reduces the number of context switches in eventlet by avoiding scheduling actions (throwing an Exception), then unscheduling them if the timeouts are cancelled. For example, when a timeout is scheduled the watchdog greenlet sleeps until the earliest expiration; scheduling a shorter timeout wakes the watchdog greenlet to calculate a new sleep period, and the greenlet wakes up again for each timeout expiration. Stop the watchdog greenthread. Start the watchdog greenthread.
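A sketch of how the Watchdog and its context manager (documented just below) might be used, assuming the spawn()/WatchdogTimeout API shown here:

```
# Sketch of Watchdog-managed timeouts; treat the exact API as an assumption.
import eventlet
from swift.common.utils import Watchdog, WatchdogTimeout

watchdog = Watchdog()
watchdog.spawn()  # start the watchdog greenthread

try:
    # raise eventlet.Timeout if the block takes longer than 1 second
    with WatchdogTimeout(watchdog, 1.0, eventlet.Timeout):
        eventlet.sleep(2.0)  # simulated slow operation
except eventlet.Timeout:
    print('operation timed out')
```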
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory. (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. eg: they can be used to exclude some elements based on a custom condition defined by the caller. hook_pre_(device|partition|suffix|hash) are called before yielding the element, hook_post_(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. eg: saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yield_hash_dirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitions_filter a callable taking (datadir_path, [list of parts]) as parameters and returning a [list of parts] suffixes_filter a callable taking (part_path, [list of suffixes]) as parameters and returning a [list of suffixes] hashes_filter a callable taking (suff_path, [list of hashes]) as parameters and returning a [list of hashes] hook_pre_device a callable taking device_path as parameter hook_post_device a callable taking device_path as parameter hook_pre_partition a callable taking part_path as parameter hook_post_partition a callable taking part_path as parameter hook_pre_suffix a callable taking suff_path as parameter hook_post_suffix a callable taking suff_path as parameter hook_pre_hash a callable taking hash_path as parameter hook_post_hash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yield_hash_dirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize no of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware) env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string. If the comma-separated string is empty/None, just returns item.
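A tiny sketch exercising two of the helpers just described, compute_eta() and csv_append(); the numbers are illustrative and the exact ETA tuple depends on the clock:

```
# Sketch of compute_eta() and csv_append() (illustrative values).
import time
from swift.common.utils import compute_eta, csv_append

start = time.time() - 30           # pretend the operation began 30s ago
eta = compute_eta(start, 25, 100)  # ~90 seconds left -> e.g. (90, 's')
print(eta)

line = csv_append('', 'alpha')     # 'alpha'
line = csv_append(line, 'beta')    # 'alpha,beta'
```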
Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of request_helpers.http_response_to_document_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header entity_length: length of the requested entity (only used in the multipart/byteranges case: equal to the response length). If omitted, * will be used. Each part_iter will be exhausted prior to calling next(ranges_iter). boundary MIME boundary to use, sans dashes (e.g. "boundary", not "--boundary"). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See document_iters_to_http_response_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content_type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and end_marker. If none of includes, marker or end_marker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it. It's basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used response. This method will read up to DEFAULT_DRAIN_LIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body.
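A short sketch of the namespace lookup helpers above, assuming find_namespace() and filter_namespaces() are importable from swift.common.utils; the names and bounds are invented for illustration:

```
# Sketch of find_namespace() / filter_namespaces() (illustrative values).
from swift.common.utils import Namespace, find_namespace, filter_namespaces

namespaces = [Namespace('a/c-0', '', 'g'),
              Namespace('a/c-1', 'g', 'p'),
              Namespace('a/c-2', 'p', '')]

hit = find_namespace('kiwi', namespaces)   # 'g' < 'kiwi' <= 'p'
assert hit.name == 'a/c-1'

# keep only namespaces overlapping the ('h', 'q') marker window
subset = filter_namespaces(namespaces, None, 'h', 'q')
assert [ns.name for ns in subset] == ['a/c-1', 'a/c-2']
```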
Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fs_path_or_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystem's mount point space_needed minimum bytes or percentage of free space is_percent if True, then space_needed is treated as a percentage of the filesystem's capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that path's dir. A valid db filename has the form: ``` <hash>[_<epoch>].db ``` where <hash> matches the <hash> part of the given db_path as would be parsed by parse_db_filename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isn't. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldn't wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: it's hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesn't do those things. Really, it can't even try to do those things since it doesn't get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so there's no problem with forking. As eventlet monkey patching is now done before calling get_hub() in wsgi.py, if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed the select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use the eventlet.patcher.original function to get the python select module to test if poll() is available on the platform. Return partition number for given hex hash and partition power. hex_hash A hash string part_power partition power partition number devices directory where devices are mounted (e.g.
/srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but a X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds) time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ``` 'lock', None => 'lock' 'lock', 0 => 'lock' 'lock', 1 => 'lock-1' 'object', 2 => 'object-2' ``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to file descriptor at target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end
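A sketch of the file- and directory-locking context managers described here (lock_path() is documented just below); the paths are placeholders and the directory must already exist:

```
# Sketch of lock_file() / lock_path(); paths are placeholders for this demo.
import os
from swift.common.utils import lock_file, lock_path

os.makedirs('/tmp/demo-dir', exist_ok=True)

# Serialize writers on a shared state file; the lock is dropped on exit.
with lock_file('/tmp/demo.state', timeout=10, append=True) as fp:
    fp.write('checkpoint\n')

# Or lock a whole directory (via a hidden file inside it, as noted below).
with lock_path('/tmp/demo-dir', timeout=10):
    pass  # work that must not run concurrently for this directory
```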
Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, a file or directory has to be opened in Write mode. Python doesn't allow directories to be opened in Write Mode. So we work around this by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULT_LOCK_TIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string that distinguishes different types of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new path's filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if target directory already exists. This behaviour is similar to Python 3.x's os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document content_type content type of the app_iter, used to determine whether it contains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in MemcacheRing class, such as memcached set, get, etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. "divider", not "--divider") read_chunk_size size of strings read via input_file.read() Ensures the path is a directory or makes it if not.
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. "divider", not "--divider") read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ("form-data", {"name": "somefile", "filename": "test.html"}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')]) ``` content_type content-type to parse a tuple containing (content type, list of k, v parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >>> parse_db_filename("ab2134.db") ('ab2134', None, '.db') >>> parse_db_filename("ab2134_1234567890.12345.db") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be" }, { "data": "Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators have been executed before continuing. :param iterable: an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsyncd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories. 
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to a object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different type of servers: If option replication_server is None then this decorator doesnt matter. If option replication_server is True then ONLY decorated with this decorator methods will be started. If option replication_server is False then decorated with this decorator methods will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are" }, { "data": ":param its: list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate devices variables inside a rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dirext if present directories that end with dirext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that items value will determine if the replication ip address and port are returned. If neither usereplication nor nodedict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error. 
Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in Swift's WSGI server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, ensured to be synced to disk, and then moved to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a WSGI pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters. Default is 0, which means the start of the pipeline.
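As a hedged sketch of how these methods combine, middleware can reorder the pipeline from a modify_wsgi_pipeline() hook; here pipe is the PipelineWrapper passed to that hook and my_filter is a hypothetical entry point name:

```
def modify_wsgi_pipeline(pipe):
    # Insert the hypothetical 'my_filter' middleware exactly once, right
    # after catch_errors when that is how the pipeline begins.
    if 'my_filter' in pipe:
        return
    ctx = pipe.create_filter('my_filter')
    index = 1 if pipe.startswith('catch_errors') else 0
    pipe.insert_filter(ctx, index=index)
```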
Tests if the pipeline starts with the given entry point name. entry_point_name entry point of middleware or app (Swift only) True if entry_point_name is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The servers_per_port integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. servers_per_port (int) The number of workers to run per port. Yields all known listen sockets. Log a server's exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and started. Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will be passed into the log_sock_exit() and register_worker_start() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The socket's (port, server_idx) as yielded by new_worker_socks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The server's LogAdaptor object. Yields all known listen sockets. Log a server's exit. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opaque_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will be passed into the log_sock_exit() and register_worker_start() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited. NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited.
Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or None to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with wsgi spec) will quote env['PATH_INFO']. url += quote(environ['PATH_INFO']) query_string The new query_string or None to use the original. When building a url, a Webob Request will append the query string directly to the url. url += '?' + env['QUERY_STRING'] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_env make_subrequest calls this make_env to help build the swob.Request. Fresh swob.Request object.
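A minimal sketch of issuing an internal subrequest with make_subrequest(); everything except the swift.common.wsgi helper is illustrative, and the path is assumed to already be quoted as described above:

```
from swift.common.wsgi import make_subrequest

def head_backend_object(app, env, account, container, obj):
    # 'app' is the rest of the WSGI pipeline; names here are illustrative.
    sub = make_subrequest(env, method='HEAD',
                          path='/v1/%s/%s/%s' % (account, container, obj),
                          agent='%(orig)s ExampleMiddleware',
                          swift_source='EXMW')
    # get_response() runs the request down the remaining pipeline.
    return sub.get_response(app)
```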
Runs the server according to some strategy. The default strategy runs a specified number of workers in a pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a servers_per_port setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allow_modify_pipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whose first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclass (e.g. StoragePolicy, ECStoragePolicy, etc.) or use reload_storage_policies() to load POLICIES from swift.conf. The object_ring property is lazy loaded once the service's swift_dir is known via get_object_ring(), but it may be over-ridden via object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldn't be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. This shorthand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment.
Backend index for PyECLib node_index integer of node index integer of actual fragment index. If param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed(). EC specific validation. Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies defined, Swift will create one The policy index must be a non-negative integer If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default Policy indexes must be unique Policy names are required Policy names are case insensitive Policy names must contain only letters, digits or a dash Policy names must be unique The policy name Policy-0 can only be used for the policy with index 0 If any policies are defined, exactly one policy must be declared default Deprecated policies cannot be declared the default Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy. If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove.
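A hedged sketch of the lookup methods just described, using the module-level POLICIES collection (assumed importable from swift.common.storage_policy):

```
from swift.common.storage_policy import POLICIES  # assumed import location

policy = POLICIES.get_by_name('gold')   # None if no such policy exists
default = POLICIES.get_by_index(None)   # an index of None is treated as 0
if policy is not None:
    # Policies coerce to their integer index; the ring is lazily loaded
    # (and cached) from the given swift_dir.
    ring = POLICIES.get_object_ring(int(policy), swift_dir='/etc/swift')
```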
Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policy_or_index StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string.
{ "category": "Runtime", "file_name": "misc.html#wsgi.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Extract the account ACLs from the given account_info, and return the ACLs. info a dict of the form returned by getaccountinfo None (no ACL system metadata is set), or a dict of the form:: {admin: [], read-write: [], read-only: []} ValueError on a syntactically invalid header Returns a cleaned ACL header value, validating that it meets the formatting requirements for standard Swift ACL strings. The ACL format is: ``` [item[,item...]] ``` Each item can be a group name to give access to or a referrer designation to grant or deny based on the HTTP Referer header. The referrer designation format is: ``` .r:[-]value ``` The .r can also be .ref, .referer, or .referrer; though it will be shortened to just .r for decreased character count usage. The value can be * to specify any referrer host is allowed access, a specific host name like www.example.com, or if it has a leading period . or leading *. it is a domain name specification, like .example.com or *.example.com. The leading minus sign - indicates referrer hosts that should be denied access. Referrer access is applied in the order they are specified. For example, .r:.example.com,.r:-thief.example.com would allow all hosts ending with .example.com except for the specific host thief.example.com. Example valid ACLs: ``` .r:* .r:*,.r:-.thief.com .r:*,.r:.example.com,.r:-thief.example.com .r:*,.r:-.thief.com,bobsaccount,suesaccount:sue bobsaccount,suesaccount:sue ``` Example invalid ACLs: ``` .r: .r:- ``` By default, allowing read access via .r will not allow listing objects in the container just retrieving objects from the container. To turn on listings, use the .rlistings directive. Also, .r designations arent allowed in headers whose names include the word write. ACLs that are messy will be cleaned up. Examples: | 0 | 1 | |:-|:-| | Original | Cleaned | | bob, sue | bob,sue | | bob , sue | bob,sue | | bob,,,sue | bob,sue | | .referrer : | .r: | | .ref:*.example.com | .r:.example.com | | .r:, .rlistings | .r:,.rlistings | Original Cleaned bob, sue bob,sue bob , sue bob,sue bob,,,sue bob,sue .referrer : * .r:* .ref:*.example.com .r:.example.com .r:*, .rlistings .r:*,.rlistings name The name of the header being cleaned, such as X-Container-Read or X-Container-Write. value The value of the header being cleaned. The value, cleaned of extraneous formatting. ValueError If the value does not meet the ACL formatting requirements; the error message will indicate why. Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific format_acl method, defaulting to version 1 for backward compatibility. kwargs keyword args appropriate for the selected ACL syntax version (see formataclv1() or formataclv2()) Returns a standard Swift ACL string for the given inputs. Caller is responsible for ensuring that :referrers: parameter is only given if the ACL is being generated for X-Container-Read. (X-Container-Write and the account ACL headers dont support referrers.) groups a list of groups (and/or members in most auth systems) to grant access referrers a list of referrer designations (without the leading .r:) header_name (optional) header name of the ACL were preparing, for clean_acl; if None, returned ACL wont be cleaned a Swift ACL string for use in X-Container-{Read,Write}, X-Account-Access-Control, etc. Returns a version-2 Swift ACL JSON string. 
Compatibility wrapper to help migrate ACL syntax from version 1 to 2. Delegates to the appropriate version-specific parse_acl method, attempting to determine the version from the types of args/kwargs. args positional args for the selected ACL syntax version kwargs keyword args for the selected ACL syntax version (see parse_acl_v1() or parse_acl_v2()) the return value of parse_acl_v1() or parse_acl_v2() Parses a standard Swift ACL string into a referrers list and groups list. See clean_acl() for documentation of the standard Swift ACL format. acl_string The standard Swift ACL string to parse. A tuple of (referrers, groups) where referrers is a list of referrer designations (without the leading .r:) and groups is a list of groups to allow access. Parses a version-2 Swift ACL string and returns a dict of ACL info. data string containing the ACL data in JSON format A dict (possibly empty) containing ACL info, e.g.: {'groups': [], 'referrers': []} None if data is None, is not valid JSON or does not parse as a dict empty dictionary if data is an empty string Returns True if the referrer should be allowed based on the referrer_acl list (as returned by parse_acl()). See clean_acl() for documentation of the standard Swift ACL format. referrer The value of the HTTP Referer header. referrer_acl The list of referrer designations as returned by parse_acl(). True if the referrer should be allowed; False if not. Monkey Patch httplib.HTTPResponse to buffer reads of headers. This can improve performance when making large numbers of small HTTP requests. This module also provides helper functions to make HTTP connections using BufferedHTTPResponse. Warning If you use this, be sure that the libraries you are using do not access the socket directly (xmlrpclib, I'm looking at you :/), and instead make all calls through httplib. Bases: HTTPConnection HTTPConnection class that uses BufferedHTTPResponse Connect to the host and port specified in __init__. Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not been handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. Send a request header line to the server. For example: h.putheader('Accept', 'text/html') Send a request to the server. method specifies an HTTP request method, e.g. GET. url specifies the object being requested, e.g. /index.html. skip_host if True does not add automatically a Host: header skip_accept_encoding if True does not add automatically an Accept-Encoding: header alias of BufferedHTTPResponse Bases: HTTPResponse HTTPResponse class that buffers reading of headers Flush and close the IO object. This method has no effect if the file is already closed.
Terminate the socket with extreme prejudice. Closes the underlying socket regardless of whether or not anyone else has references to it. Use this when you are certain that nobody else you care about has a reference to this socket. Read and return up to n bytes. If the argument is omitted, None, or negative, reads and returns all data until EOF. If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached" }, { "data": "But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent. Returns an empty bytes object on EOF. Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment. Read and return a line from the stream. If size is specified, at most size bytes will be read. The line terminator is always bn for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized. Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to device device of the node to query partition partition on the device method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Helper function to create an HTTPConnection object. If ssl is set True, HTTPSConnection will be used. However, if ssl=False, BufferedHTTPConnection will be used, which is buffered for backend Swift services. ipaddr IPv4 address to connect to port port to connect to method HTTP method to request (GET, PUT, POST, etc.) path request path headers dictionary of headers query_string request query string ssl set True if SSL should be used (default: False) HTTPConnection object Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check that x-delete-after and x-delete-at headers have valid values. Values should be positive integers and correspond to a time greater than the request timestamp. If the x-delete-after header is found then its value is used to compute an x-delete-at value which takes precedence over any existing x-delete-at header. request the swob request object HTTPBadRequest in case of invalid values the swob request object Verify that the path to the device is a directory and is a lesser constraint that is enforced when a full mount_check isnt possible with, for instance, a VM using loopback or partitions. root base path where the dir is drive drive name to be checked full path to the device ValueError if drive fails to validate Validate the path given by root and drive is a valid existing directory. 
root base path where the devices are mounted drive drive name to be checked mount_check additionally require path is mounted full path to the device ValueError if drive fails to validate Helper function for checking if a string can be converted to a float. string string to be verified as a float True if the string can be converted to a float, False otherwise Check metadata sent in the request headers. This should only check that the metadata in the request given is valid. Checks against account/container overall metadata should be forwarded on to its respective server to be checked. req request object target_type str: one of: object, container, or account: indicates which type the target storage for the metadata is HTTPBadRequest with bad metadata otherwise None Verify that the path to the device is a mount point and mounted. This allows us to fast fail on drives that have been unmounted because of issues, and also prevents us from accidentally filling up the root partition. root base path where the devices are mounted drive drive name to be checked full path to the device ValueError if drive fails to validate Validate that the header contains valid account or container name. req HTTP request object name header value to validate target_type which header is being validated (Account or Container) A properly encoded account name or container name HTTPPreconditionFailed if account header is not well formatted. Check to ensure that everything is alright about an object to be created. req HTTP request object object_name name of object to be created HTTPRequestEntityTooLarge the object is too large HTTPLengthRequired missing content-length header and not a chunked request HTTPBadRequest missing or bad content-type header, or bad metadata HTTPNotImplemented unsupported transfer-encoding header value Validate if a string is valid UTF-8 str or unicode and that it does not contain any reserved characters. string string to be validated internal boolean, allows reserved characters if True True if the string is valid utf-8 str or unicode and contains no null characters, False otherwise Parse SWIFT_CONF_FILE and reset module level global constraint attrs, populating OVERRIDE_CONSTRAINTS and EFFECTIVE_CONSTRAINTS along the way. Checks if the requested version is valid. Currently Swift only supports v1 and v1.0. Helper function to extract a timestamp from requests that require one. request the swob request object a valid Timestamp instance HTTPBadRequest on missing or invalid X-Timestamp Bases: object Loads and parses the container-sync-realms.conf, occasionally checking the file's mtime to see if it needs to be reloaded. Returns a list of clusters for the realm. Returns the endpoint for the cluster in the realm. Returns the hexdigest string of the HMAC-SHA1 (RFC 2104) for the information given. request_method HTTP method of the request. path The path to the resource (url-encoded). x_timestamp The X-Timestamp header value for the request. nonce A unique value for the request. realm_key Shared secret at the cluster operator level. user_key Shared secret at the user's container level. hexdigest str of the HMAC-SHA1 for the request. Returns the key for the realm. Returns the key2 for the realm. Returns a list of realms. Forces a reload of the conf file. Returns a tuple of (digest_algorithm, hex_encoded_digest) from a client-provided string of the form: ``` <hex-encoded digest> ``` or: ``` <algorithm>:<base64-encoded digest> ``` Note that hex-encoded strings must use one of sha1, sha256, or sha512. ValueError on parse failures
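A worked sketch of the container-sync signature described above: an HMAC-SHA1 over the request details mixing in both shared secrets. The helper name and the exact field layout here are illustrative rather than Swift's canonical implementation:

```
import hmac
from hashlib import sha1

def sync_sig(request_method, path, x_timestamp, nonce, realm_key, user_key):
    # Both the cluster-level realm_key and the per-container user_key feed
    # into one MAC, so neither party alone can forge a valid signature.
    msg = '\n'.join([request_method, path, str(x_timestamp), nonce, user_key])
    return hmac.new(realm_key.encode('utf8'), msg.encode('utf8'),
                    sha1).hexdigest()
```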
Pulls out allowed_digests from the supplied conf. Then compares them with the list of supported and deprecated digests and returns whatever remain. When something is unsupported or deprecated it'll log a warning. conf_digests iterable of allowed digests. If empty, defaults to DEFAULT_ALLOWED_DIGESTS. logger optional logger; if provided, use it to issue deprecation warnings A set of allowed digests that are supported and a set of deprecated digests. ValueError, if there are no digests left to return. Returns the hexdigest string of the HMAC (see RFC 2104) for the request. request_method Request method to allow. path The path to the resource to allow access to. expires Unix timestamp as an int for when the URL expires. key HMAC shared secret. digest constructor or the string name for the digest to use in calculating the HMAC Defaults to SHA1 ip_range The ip range from which the resource is allowed to be accessed. We need to put the ip_range as the first argument to hmac to avoid manipulation of the path due to newlines being valid in paths e.g. /v1/a/c/o\n127.0.0.1 hexdigest str of the HMAC for the request using the specified digest algorithm. Internal client library for making calls directly to the servers rather than through the proxy. Bases: ClientException Bases: ClientException Delete container directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers ClientException HTTP DELETE request failed Delete object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP DELETE request failed Get listings directly from the account server. node node dictionary from the ring part partition the account is on account account name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing a tuple of (response headers, a list of containers) The response headers will be a HeaderKeyDict. Get container listings directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name marker marker query limit query limit prefix prefix query delimiter delimiter for the query conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response end_marker end_marker query reverse reverse the returned listing headers headers to be included in the request extra_params a dict of extra parameters to be included in the request. It can be used to pass additional parameters, e.g., {'states': 'updating'} can be used with shard_range/namespace listing. It can also be used to pass the existing keyword args, like marker or limit, but if the same parameter appears twice in both keyword arg (not None) and extra_params, this function will raise TypeError.
a tuple of (response headers, a list of objects) The response headers will be a HeaderKeyDict. Get object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response respchunksize if defined, chunk size of data to read. headers dict to be passed into HTTPConnection headers a tuple of (response headers, the objects contents) The response headers will be a HeaderKeyDict. ClientException HTTP GET request failed Get recon json directly from the storage server. node node dictionary from the ring recon_command recon string (post /recon/) conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers deserialized json response DirectClientReconException HTTP GET request failed Get suffix hashes directly from the object server. Note that unlike other direct_client functions, this one defaults to using the replication network to make" }, { "data": "node node dictionary from the ring part partition the container is on conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers dict of suffix hashes ClientException HTTP REPLICATE request failed Request container information directly from the container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Request object information directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name obj object name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers dict to be passed into HTTPConnection headers a dict containing the responses headers in a HeaderKeyDict ClientException HTTP HEAD request failed Make a POST request to a container server. node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request ClientException HTTP PUT request failed Direct update to object metadata on object server. node node dictionary from the ring part partition the container is on account account name container container name name object name headers headers to store as metadata conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response ClientException HTTP POST request failed Make a PUT request to a container server. 
node node dictionary from the ring part partition the container is on account account name container container name conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response headers additional headers to include in the request contents an iterable or string to send in request body (optional) content_length value to send as content-length header (optional) chunk_size chunk size of data to send (optional) ClientException HTTP PUT request failed Put object directly from the object server. node node dictionary from the ring part partition the container is on account account name container container name name object name contents an iterable or string to read object data from content_length value to send as content-length header etag etag of contents content_type value to send as content-type header headers additional headers to include in the request conn_timeout timeout in seconds for establishing the connection response_timeout timeout in seconds for getting the response chunk_size if defined, chunk size of data to send. etag from the server response ClientException HTTP PUT request failed Get the headers ready for a request. All requests should have a User-Agent string, but if one is passed in dont over-write it. Not all requests will need an X-Timestamp, but if one is passed in do not over-write it. headers dict or None, base for HTTP headers add_ts boolean, should be True for any unsafe HTTP request HeaderKeyDict based on headers and ready for the request Helper function to retry a given function a number of" }, { "data": "func callable to be called retries number of retries error_log logger for errors args arguments to send to func kwargs keyward arguments to send to func (if retries or error_log are sent, they will be deleted from kwargs before sending on to func) result of func ClientException all retries failed Bases: SwiftException Bases: SwiftException Bases: Timeout Bases: Timeout Bases: Exception Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: DiskFileError Bases: DiskFileError Bases: DiskFileNotExist Bases: DiskFileError Bases: SwiftException Bases: DiskFileDeleted Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: DiskFileError Bases: SwiftException Bases: RingBuilderError Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: DatabaseAuditorException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: ListingIterError Bases: ListingIterError Bases: MessageTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: LockTimeout Bases: OSError Bases: SwiftException Bases: Exception Bases: SwiftException Bases: SwiftException Bases: Exception Bases: LockTimeout Bases: Timeout Bases: SwiftException Bases: SwiftException Bases: RingBuilderError Bases: SwiftException Bases: SwiftException Bases: SwiftException Bases: Exception Bases: SwiftException Bases: EncryptionException Bases: object Wrapper for file object to compress object while reading. Can be used to wrap file objects passed to InternalClient.upload_object(). Used in testing of InternalClient. file_obj File object to wrap. compresslevel Compression level, defaults to 9. chunk_size Size of chunks read when iterating using object, defaults to 4096. Reads a chunk from the file object. Params are passed directly to the underlying file objects read(). Compressed chunk from file object. 
Sets the object to the state needed for the first read. Bases: object An internal client that uses a swift proxy app to make requests to Swift. This client will exponentially slow down for retries. conf_path Full path to proxy config. user_agent User agent to be sent to requests to Swift. requesttries Number of tries before InternalClient.makerequest() gives up. usereplicationnetwork Force the client to use the replication network over the cluster. global_conf a dict of options to update the loaded proxy config. Options in globalconf will override those in confpath except where the conf_path option is preceded by set. app Optionally provide a WSGI app for the internal client to use. Checks to see if a container exists. account The containers account. container Container to check. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. True if container exists, false otherwise. Creates an account. account Account to create. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Creates container. account The containers account. container Container to create. headers Defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an account. account Account to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes a container. account The containers account. container Container to delete. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Deletes an object. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). headers extra headers to send with request UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns (containercount, objectcount) for an account. account Account on which to get the information. acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets account" }, { "data": "account Account on which to get the metadata. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of account metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets container metadata. account The containers account. 
container Container to get metadata on. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). Returns dict of container metadata. Keys will be lowercase. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Gets an object. account The objects account. container The objects container. obj The object name. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of status for valid responses, defaults to (2,). params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. A 3-tuple (status, headers, iterator of object body) Gets object metadata. account The objects account. container The objects container. obj The object. metadata_prefix Used to filter values from the headers returned. Will strip that prefix from the keys in the dict returned. Defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). headers extra headers to send with request Dict of object metadata. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of containers dicts from an account. account Account on which to do the container listing. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of containers acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object lines from an uncompressed or compressed text object. Uncompress object as it is read if the objects name ends with .gz. account The objects account. container The objects container. obj The object. acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns an iterator of object dicts from a container. account The containers account. container Container to iterate objects on. marker Prefix of first desired item, defaults to . end_marker Last item returned will be less than this, defaults to . prefix Prefix of objects acceptable_statuses List of status for valid responses, defaults to (2, HTTPNOTFOUND). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Returns a swift path for a request quoting and utf-8 encoding the path parts as need be. account swift account container container, defaults to None obj object, defaults to None ValueError Is raised if obj is specified and container is" }, { "data": "Makes a request to Swift with retries. method HTTP method of request. path Path of request. headers Headers to be sent with request. acceptable_statuses List of acceptable statuses for request. 
body_file Body file to be passed along with request, defaults to None. params A dict of params to be set in request query string, defaults to None. Response object on success. UnexpectedResponse Exception raised when make_request() fails to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets account metadata. A call to this will add to the account metadata and not overwrite all of it with values in the metadata dict. To clear an account metadata value, pass an empty string as the value for the key in the metadata dict. account Account on which to get the metadata. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets container metadata. A call to this will add to the container metadata and not overwrite all of it with values in the metadata dict. To clear a container metadata value, pass an empty string as the value for the key in the metadata dict. account The containers account. container Container to set metadata on. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Sets an objects metadata. The objects metadata will be overwritten by the values in the metadata dict. account The objects account. container The objects container. obj The object. metadata Dict of metadata to set. metadata_prefix Prefix used to set metadata values in headers of requests, used to prefix keys in metadata when setting metadata, defaults to . acceptable_statuses List of status for valid responses, defaults to (2,). UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. fobj File object to read objects content from. account The objects account. container The objects container. obj The object. headers Headers to send with request, defaults to empty dict. acceptable_statuses List of acceptable statuses for request. params A dict of params to be set in request query string, defaults to None. UnexpectedResponse Exception raised when requests fail to get a response with an acceptable status Exception Exception is raised when code fails in an unexpected way. Bases: object Simple client that is used in bin/swift-dispersion-* and container sync Bases: Exception Exception raised on invalid responses to InternalClient.make_request(). message Exception message. resp The unexpected response. 
For usage with container sync For usage with container sync For usage with container sync Bases: object Main class for performing commands on groups of" }, { "data": "servers list of server names as strings alias for reload Find and return the decorated method named like cmd cmd the command to get, a string, if not found raises UnknownCommandError stop a server (no error if not running) kill child pids, optionally servicing accepted connections Get all publicly accessible commands a list of string tuples (cmd, help), the method names who are decorated as commands start a server interactively spawn server and return immediately start server and run one pass on supporting daemons graceful shutdown then restart on supporting servers seamlessly re-exec, then shutdown of old listen sockets on supporting servers stops then restarts server Find the named command and run it cmd the command name to run allow current requests to finish on supporting servers starts a server display status of tracked pids for server stops a server Bases: object Manage operations on a server or group of servers of similar type server name of server Get conf files for this server number if supplied will only lookup the nth server list of conf files Translate pidfile to a corresponding conffile pidfile a pidfile for this server, a string the conffile for this pidfile Translate conffile to a corresponding pidfile conffile an conffile for this server, a string the pidfile for this conffile Get running pids a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to terminate Generator, yields (pid_file, pids) Kill child pids, leaving server overseer to respawn them graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Kill running pids graceful if True, attempt SIGHUP on supporting servers seamless if True, attempt SIGUSR1 on supporting servers a dict mapping pids (ints) to pid_files (paths) Collect conf files and attempt to spawn the processes for this server Get pid files for this server number if supplied will only lookup the nth server list of pid files Send a signal to child pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Send a signal to pids for this server sig signal to send a dict mapping pids (ints) to pid_files (paths) Launch a subprocess for this server. conffile path to conffile to use as first arg once boolean, add once argument to command wait boolean, if true capture stdout with a pipe daemon boolean, if false ask server to log to console additional_args list of additional arguments to pass on the command line the pid of the spawned process Display status of server pids if not supplied pids will be populated automatically number if supplied will only lookup the nth server 1 if server is not running, 0 otherwise Send stop signals to pids for this server a dict mapping pids (ints) to pid_files (paths) wait on spawned procs to start Bases: Exception Decorator to declare which methods are accessible as commands, commands always return 1 or 0, where 0 should indicate success. func function to make public Formats server name as swift compatible server names E.g. swift-object-server servername server name swift compatible server name and its binary name Get the current set of all child PIDs for a PID. 
pid process id Send signal to process group pid process id sig signal to send Send signal to process and check process name pid process id sig signal to send name name to ensure target process Try to increase resource limits of the OS. Move PYTHONEGGCACHE to /tmp Check whether the server is among swift servers or not, and also checks whether the server's binaries are installed or not. server name of the server True, when the server name is valid and its binaries are found. False, otherwise. Monitor a collection of server pids yielding back those pids that aren't responding to signals. server_pids a dict, lists of pids [int,] keyed on Server objects Why our own memcache client? By Michael Barton python-memcached doesn't use consistent hashing, so adding or removing a memcache server from the pool invalidates a huge percentage of cached items. If you keep a pool of python-memcached client objects, each client object has its own connection to every memcached server, only one of which is ever in use. So you wind up with n * m open sockets and almost all of them idle. This client effectively has a pool for each server, so the number of backend connections is hopefully greatly reduced. python-memcache uses pickle to store things, and there was already a huge stink about Swift using pickles in memcache (http://osvdb.org/show/osvdb/86581). That seemed sort of unfair, since nova and keystone and everyone else use pickles for memcache too, but it's hidden behind a standard library. But changing would be a security regression at this point. Also, pylibmc wouldn't work for us because it needs to use python sockets in order to play nice with eventlet. Lucid comes with memcached: v1.4.2. Protocol documentation for that version is at: http://github.com/memcached/memcached/blob/1.4.2/doc/protocol.txt Bases: object Helper class that encapsulates common parameters of a command. method the name of the MemcacheRing method that was called. key the memcached key. Bases: Pool Connection pool for Memcache Connections The server parameter can be a hostname, an IPv4 address, or an IPv6 address with an optional port. See swift.common.utils.parsesocketstring() for details. Generate a new pool item. In order for the pool to function, either this method must be overridden in a subclass or the pool must be constructed with the create argument. It accepts no arguments and returns a single instance of whatever thing the pool is supposed to contain. In general, create() is called whenever the pool exceeds its previous high-water mark of concurrently-checked-out-items. In other words, in a new pool with min_size of 0, the very first call to get() will result in a call to create(). If the first caller calls put() before some other caller calls get(), then the first item will be returned, and create() will not be called a second time. Return an item from the pool, when one is available. This may cause the calling greenthread to block. Bases: Exception Bases: MemcacheConnectionError Bases: Timeout Bases: object Simple, consistent-hashed memcache client. Decrements a key which has a numeric value by delta. Calls incr with -delta. key key delta amount to subtract from the value of key (or set the value to 0 if the key is not found) will be cast to an int time the time to live result of decrementing MemcacheConnectionError Deletes a key/value pair from memcache.
key key to be deleted server_key key to use in determining which server in the ring is used Gets the object specified by key. It will also unserialize the object before returning if it is serialized in memcache with JSON. key key raiseonerror if True, propagate Timeouts and other errors. By default, errors are treated as cache misses. value of the key in memcache Gets multiple values from memcache for the given keys. keys keys for values to be retrieved from memcache server_key key to use in determining which server in the ring is used list of values Increments a key which has a numeric value by delta. If the key can't be found, it's added as delta or 0 if delta < 0. If passed a negative number, will use memcached's decr. Returns the int stored in memcached Note: The data memcached stores as the result of incr/decr is an unsigned int. decrs that result in a number below 0 are stored as 0. key key delta amount to add to the value of key (or set as the value if the key is not found) will be cast to an int time the time to live result of incrementing MemcacheConnectionError Set a key/value pair in memcache key key value value serialize if True, value is serialized with JSON before sending to memcache time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with the python-memcached interface. This implementation ignores it. raiseonerror if True, propagate Timeouts and other errors. By default, errors are ignored. Sets multiple key/value pairs in memcache. mapping dictionary of keys and values to be set in memcache server_key key to use in determining which server in the ring is used serialize if True, value is serialized with JSON before sending to memcache. time the time to live mincompresslen minimum compress length, this parameter was added to keep the signature compatible with the python-memcached interface. This implementation ignores it. Build a MemcacheRing object from the given config. It will also use the passed in logger. conf a dict, the config options logger a logger Sanitize a timeout value to use an absolute expiration time if the delta is greater than 30 days (in seconds). Note that the memcached server translates negative values to mean a delta of 30 days in seconds (and 1 additional second), client beware. Returns the set of registered sensitive headers. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns the set of registered sensitive query parameters. Used by swift.common.middleware.proxy_logging to perform redactions prior to logging. Returns information about the swift cluster that has been previously registered with the registerswiftinfo call. admin boolean value, if True will additionally return an admin section with information previously registered as admin info. disallowed_sections list of section names to be withheld from the information returned. dictionary of information about the swift cluster. Register a header as being sensitive. Sensitive headers are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information. header The (case-insensitive) header name which, if present, may contain sensitive information. Examples include X-Auth-Token and (if s3api is enabled) Authorization. Limited to ASCII characters. Register a query parameter as being sensitive. Sensitive query parameters are automatically redacted when logging. See the revealsensitiveprefix option in the proxy-server sample config for more information.
query_param The (case-sensitive) query parameter name which, if present, may contain sensitive information. Examples include tempurlsignature and (if s3api is enabled) X-Amz-Signature. Limited to ASCII characters. Registers information about the swift cluster to be retrieved with calls to getswiftinfo. Note: do not use . in name or in the keys of kwargs, as it is reserved for use in the disallowed_sections to remove unwanted keys from /info. name string, the section name to place the information under. admin boolean, if True, information will be registered to an admin section which can optionally be withheld when requesting the information. kwargs key value arguments representing the information to be added. ValueError if name or any of the keys in kwargs has . in it Miscellaneous utility functions for use in generating responses. Why not swift.common.utils, you ask? Because this way we can import things from swob in here without creating circular imports. Bases: object Iterable that returns the object contents for a large object. req original request object app WSGI application from which segments will come listing_iter iterable yielding the object segments to fetch, along with the byte sub-ranges to fetch. Each yielded item should be a dict with the following keys: path or raw_data, first-byte, last-byte, hash (optional), bytes (optional). If hash is None, no MD5 verification will be done. If bytes is None, no length verification will be done. If first-byte and last-byte are None, then the entire object will be fetched. maxgettime maximum permitted duration of a GET request (seconds) logger logger object swift_source value of swift.source in subrequest environ (just for logging) ua_suffix string to append to user-agent. name name of manifest (used in logging only) responsebodylength optional response body length for the response being sent to the client. swob.Response will only respond with a 206 status in certain cases; one of those is if the body iterator responds to .appiterrange(). However, this object (or really, its listing iter) is smart enough to handle the range stuff internally, so we just no-op this out for swob. This method assumes that iter(self) yields all the data bytes that go into the response, but none of the MIME stuff. For example, if the response will contain three MIME docs with data abcd, efgh, and ijkl, then iter(self) will give out the bytes abcdefghijkl. This method inserts the MIME stuff around the data bytes. Called when the client disconnects. Ensure that the connection to the backend server is closed. Start fetching object data to ensure that the first segment (if any) is valid. This is to catch cases like first segment is missing or first segment's etag doesn't match manifest. Note: this does not validate that you have any segments. A zero-segment large object is not erroneous; it is just empty. Validate that the value of path-like header is well formatted. We assume the caller ensures that specific header is present in req.headers. req HTTP request object name header name length length of path segment check error_msg error message for client A tuple with path parts according to length HTTPPreconditionFailed if header value is not well formatted. Will copy desired subset of headers from from_r to to_r. from_r a swob Request or Response to_r a swob Request or Response condition a function that will be passed the header key as a single argument and should return True if the header is to be copied. Returns the full X-Object-Sysmeta-Container-Update-Override-* header key.
key the key you want to override in the container update the full header key Get the ip address and port that should be used for the given node. The normal ip address and port are returned unless the node or headers indicate that the replication ip address and port should be used. If the headers dict has an item with key x-backend-use-replication-network and a truthy value then the replication ip address and port are returned. Otherwise if the node dict has an item with key use_replication and truthy value then the replication ip address and port are returned. Otherwise the normal ip address and port are returned. node a dict describing a node headers a dict of headers a tuple of (ip address, port) Utility function to split and validate the request path and storage policy. The storage policy index is extracted from the headers of the request and converted to a StoragePolicy instance. The remaining args are passed through to" }, { "data": "a list, result of splitandvalidate_path() with the BaseStoragePolicy instance appended on the end HTTPServiceUnavailable if the path is invalid or no policy exists with the extracted policy_index. Returns the Object Transient System Metadata header for key. The Object Transient System Metadata namespace will be persisted by backend object servers. These headers are treated in the same way as object user metadata i.e. all headers in this namespace will be replaced on every POST request. key metadata key the entire object transient system metadata header for key Get a parameter from an HTTP request ensuring proper handling UTF-8 encoding. req request object name parameter name default result to return if the parameter is not found HTTP request parameter value, as a native string (in py2, as UTF-8 encoded str, not unicode object) HTTPBadRequest if param not valid UTF-8 byte sequence Generate a valid reserved name that joins the component parts. a string Returns the prefix for system metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types system metadata headers Returns the prefix for user metadata headers for given server type. This prefix defines the namespace for headers that will be persisted by backend servers. server_type type of backend server i.e. [account|container|object] prefix string for server types user metadata headers Any non-range GET or HEAD request for a SLO object may include a part-number parameter in query string. If the passed in request includes a part-number parameter it will be parsed into a valid integer and returned. If the passed in request does not include a part-number param we will return None. If the part-number parameter is invalid for the given request we will raise the appropriate HTTP exception req the request object validated part-number value or None HTTPBadRequest if request or part-number param is not valid Takes a successful object-GET HTTP response and turns it into an iterator of (first-byte, last-byte, length, headers, body-file) 5-tuples. The response must either be a 200 or a 206; if you feed in a 204 or something similar, this probably wont work. response HTTP response, like from bufferedhttp.http_connect(), not a swob.Response. Helper function to check if a request has either the headers x-backend-open-expired or x-backend-replication for the backend to access expired objects. 
request request object Tests if a header key starts with and is longer than the prefix for object transient system metadata. key header key True if the key satisfies the test, False otherwise Helper function to check if a request with the header x-open-expired can access an object that has not yet been reaped by the object-expirer based on the allowopenexpired global config. app the application instance req request object Tests if a header key starts with and is longer than the system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Tests if a header key starts with and is longer than the user or system metadata prefix for given server type. server_type type of backend server i.e. [account|container|object] key header key True if the key satisfies the test, False otherwise Determine if replication network should be used. headers a dict of headers the value of the x-backend-use-replication-network item from headers. If no headers are given or the item is not found then False is returned. Tests if a header key starts with and is longer than the user metadata prefix for given server type. server_type type of backend server" }, { "data": "[account|container|object] key header key True if the key satisfies the test, False otherwise Removes items from a dict whose keys satisfy the given condition. headers a dict of headers condition a function that will be passed the header key as a single argument and should return True if the header is to be removed. a dict, possibly empty, of headers that have been removed Helper function to resolve an alternative etag value that may be stored in metadata under an alternate name. The value of the requests X-Backend-Etag-Is-At header (if it exists) is a comma separated list of alternate names in the metadata at which an alternate etag value may be found. This list is processed in order until an alternate etag is found. The left most value in X-Backend-Etag-Is-At will have been set by the left most middleware, or if no middleware, by ECObjectController, if an EC policy is in use. The left most middleware is assumed to be the authority on what the etag value of the object content is. The resolver will work from left to right in the list until it finds a value that is a name in the given metadata. So the left most wins, IF it exists in the metadata. By way of example, assume the encrypter middleware is installed. If an object is not encrypted then the resolver will not find the encrypter middlewares alternate etag sysmeta (X-Object-Sysmeta-Crypto-Etag) but will then find the EC alternate etag (if EC policy). But if the object is encrypted then X-Object-Sysmeta-Crypto-Etag is found and used, which is correct because it should be preferred over X-Object-Sysmeta-Ec-Etag. req a swob Request metadata a dict containing object metadata an alternate etag value if any is found, otherwise None Helper function to remove Range header from request if metadata matching the X-Backend-Ignore-Range-If-Metadata-Present header is found. req a swob Request metadata dictionary of object metadata Utility function to split and validate the request path. result of split_path() if everythings okay, as native strings HTTPBadRequest if somethings not okay Separate a valid reserved name into the component parts. a list of strings Removes the object transient system metadata prefix from the start of a header key. 
key header key stripped header key Removes the system metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Removes the user metadata prefix for a given server type from the start of a header key. server_type type of backend server i.e. [account|container|object] key header key stripped header key Helper function to update an X-Backend-Etag-Is-At header whose value is a list of alternative header names at which the actual object etag may be found. This informs the object server where to look for the actual object etag when processing conditional requests. Since the proxy server and/or middleware may set alternative etag header names, the value of X-Backend-Etag-Is-At is a comma separated list which the object server inspects in order until it finds an etag value. req a swob Request name name of a sysmeta where alternative etag may be found Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present header whose value is a list of header names which, if any are present on an object, mean the object server should respond with a 200 instead of a 206 or 416. req a swob Request name name of a header which, if found, indicates the proxy will want the whole object Validate internal account name. HTTPBadRequest Validate internal account and container names. HTTPBadRequest Validate internal account, container and object names. HTTPBadRequest Get list of parameters from an HTTP request, validating the encoding of each parameter. req request object names parameter names a dict mapping parameter names to values for each name that appears in the request parameters HTTPBadRequest if any parameter value is not a valid UTF-8 byte sequence Implementation of WSGI Request and Response objects. This library has a very similar API to Webob. It wraps WSGI request environments and response values into objects that are more friendly to interact with. Why Swob and not just use WebOb? By Michael Barton We used webob for years. The main problem was that the interface wasn't stable. For a while, each of our several test suites required a slightly different version of webob to run, and none of them worked with the then-current version. It was a huge headache, so we just scrapped it. This is kind of a ton of code, but it's also been a huge relief to not have to scramble to add a bunch of code branches all over the place to keep Swift working every time webob decides some interface needs to change. Bases: object Wraps a Request's Accept header as a friendly object. headerval value of the header as a str Returns the item from options that best matches the accept header. Returns None if no available options are acceptable to the client. options a list of content-types the server can respond with ValueError if the header is malformed Bases: Response, Exception Bases: MutableMapping A dict-like object that proxies requests to a wsgi environ, rewriting header keys to environ keys. For example, headers['Content-Range'] sets and gets the value of headers.environ['HTTP_CONTENT_RANGE'] Bases: object Wraps a Request's If-[None-]Match header as a friendly object. headerval value of the header as a str Bases: object Wraps a Request's Range header as a friendly object. After initialization, range.ranges is populated with a list of (start, end) tuples denoting the requested ranges.
If there were any syntactically-invalid byte-range-spec values, the constructor will raise a ValueError, per the relevant RFC: The recipient of a byte-range-set that includes one or more syntactically invalid byte-range-spec values MUST ignore the header field that includes that byte-range-set. According to the RFC 2616 specification, the following cases will all be considered syntactically invalid; thus, a ValueError is thrown so that the range header will be ignored. If the range value contains at least one of the following cases, the entire range is considered invalid, and a ValueError will be thrown so that the header will be ignored. value does not start with bytes= range value start is greater than the end, e.g. bytes=5-3 range does not have a start or end, e.g. bytes=- range does not have a hyphen, e.g. bytes=45 range value is non-numeric any combination of the above Every syntactically valid range will be added into the ranges list even when some of the ranges may not be satisfied by underlying content. headerval value of the header as a str This method is used to return multiple ranges for a given length which should represent the length of the underlying content. The constructor method init() made sure that any range in the ranges list is syntactically valid. So if length is None or the size of the ranges is zero, then the Range header should be ignored, which will eventually make the response a 200. If an empty list is returned by this method, it indicates that there are unsatisfiable ranges found in the Range header, 416 will be
application the WSGI application to call Get and set the HTTP_HOST property in the WSGI environment Get url for request/response up to path Retrieve and set the if-match property in the WSGI environ, as a Match object Retrieve and set the if-modified-since header as a datetime, set it with a datetime, int, or str Retrieve and set the if-none-match property in the WSGI environ, as a Match object Retrieve and set the if-unmodified-since header as a datetime, set it with a datetime, int, or str Properly determine the message length for this request. It will return an integer if the headers explicitly contain the message length, or None if the headers dont contain a length. The ValueError exception will be raised if the headers are invalid. ValueError if either transfer-encoding or content-length headers have bad values AttributeError if the last value of the transfer-encoding header is not chunked Get and set the REQUEST_METHOD property in the WSGI environment Provides QUERY_STRING parameters as a dictionary Provides the full path of the request, excluding the QUERY_STRING Get and set the PATH_INFO property in the WSGI environment Takes one path portion (delineated by slashes) from the pathinfo, and appends it to the scriptname. Returns the path segment. The path of the request, without host but with query string. Get and set the QUERY_STRING property in the WSGI environment Retrieve and set the range property in the WSGI environ, as a Range object Get and set the HTTP_REFERER property in the WSGI environment Get and set the HTTP_REFERER property in the WSGI environment Get and set the REMOTE_ADDR property in the WSGI environment Get and set the REMOTE_USER property in the WSGI environment Get and set the SCRIPT_NAME property in the WSGI environment Validate and split the Requests" }, { "data": "Examples: ``` ['a'] = split_path('/a') ['a', None] = split_path('/a', 1, 2) ['a', 'c'] = split_path('/a/c', 1, 2) ['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True) ``` minsegs Minimum number of segments to be extracted maxsegs Maximum number of segments to be extracted restwithlast If True, trailing data will be returned as part of last segment. If False, and there is trailing data, raises ValueError. list of segments with a length of maxsegs (non-existent segments will return as None) ValueError if given an invalid path Provides QUERY_STRING parameters as a dictionary Provides the (native string) account/container/object path, sans API version. This can be useful when constructing a path to send to a backend server, as that path will need everything after the /v1. Provides HTTPXTIMESTAMP as a Timestamp Provides the full url of the request Get and set the HTTPUSERAGENT property in the WSGI environment Bases: object WSGI Response object. Respond to the WSGI request. Warning This will translate any relative Location header value to an absolute URL using the WSGI environments HOST_URL as a prefix, as RFC 2616 specifies. However, it is quite common to use relative redirects, especially when it is difficult to know the exact HOST_URL the browser would have used when behind several CNAMEs, CDN services, etc. All modern browsers support relative redirects. To skip over RFC enforcement of the Location header value, you may set env['swift.leaverelativelocation'] = True in the WSGI environment. Attempt to construct an absolute location. 
Retrieve and set the accept-ranges header Retrieve and set the response app_iter Retrieve and set the Response body str Retrieve and set the response charset The conditional_etag keyword argument for Response will allow the conditional match value of a If-Match request to be compared to a non-standard value. This is available for Storage Policies that do not store the client object data verbatim on the storage nodes, but still need support conditional requests. Its most effectively used with X-Backend-Etag-Is-At which would define the additional Metadata key(s) where the original ETag of the clear-form client request data may be found. Retrieve and set the content-length header as an int Retrieve and set the content-range header Retrieve and set the response Content-Type header Retrieve and set the response Etag header You may call this once you have set the content_length to the whole object length and body or appiter to reset the contentlength properties on the request. It is ok to not call this method, the conditional response will be maintained for you when you call the response. Get url for request/response up to path Retrieve and set the last-modified header as a datetime, set it with a datetime, int, or str Retrieve and set the location header Retrieve and set the Response status, e.g. 200 OK Construct a suitable value for WWW-Authenticate response header If we have a request and a valid-looking path, the realm is the account; otherwise we set it to unknown. Bases: object A dict-like object that returns HTTPException subclasses/factory functions where the given key is the status code. Bases: BytesIO This class adds support for the additional wsgi.input methods defined on eventlet.wsgi.Input to the BytesIO class which would otherwise be a fine stand-in for the file-like object in the WSGI environment. A decorator for translating functions which take a swob Request object and return a Response object into WSGI callables. Also catches any raised HTTPExceptions and treats them as a returned Response. Miscellaneous utility functions for use with Swift. Regular expression to match form attributes. Bases: ClosingIterator Like itertools.chain, but with a close method that will attempt to invoke its sub-iterators close methods, if" }, { "data": "Bases: object Wrap another iterator and close it, if possible, on completion/exception. If other closeable objects are given then they will also be closed when this iterator is closed. This is particularly useful for ensuring a generator properly closes its resources, even if the generator was never started. This class may be subclassed to override the behavior of getnext_item. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: ClosingIterator A closing iterator that yields the result of function as it is applied to each item of iterable. Note that while this behaves similarly to the built-in map function, other_closeables does not have the same semantic as the iterables argument of map. function a function that will be called with each item of iterable before yielding its result. iterable iterator to wrap. other_closeables other resources to attempt to close. Bases: GreenPool GreenPool subclassed to kill its coros when it gets gced Bases: ClosingIterator Wrapper to make a deliberate periodic call to sleep() while iterating over wrapped iterator, providing an opportunity to switch greenthreads. 
This is for fairness; if the network is outpacing the CPU, we'll always be able to read and write data without encountering an EWOULDBLOCK, and so eventlet will not switch greenthreads on its own. We do it manually so that clients don't starve. The number 5 here was chosen by making stuff up. It's not every single chunk, but it's not too big either, so it seemed like it would probably be an okay choice. Note that we may trampoline to other greenthreads more often than once every 5 chunks, depending on how blocking our network IO is; the explicit sleep here simply provides a lower bound on the rate of trampolining. iterable iterator to wrap. period number of items yielded from this iterator between calls to sleep(); a negative value or 0 mean that cooperative sleep will be disabled. Bases: object A container that contains everything. If e is an instance of Everything, then x in e is true for all x. Bases: object Runs jobs in a pool of green threads, and the results can be retrieved by using this object as an iterator. This is very similar in principle to eventlet.GreenPile, except it returns results as they become available rather than in the order they were launched. Correlating results with jobs (if necessary) is left to the caller. Spawn a job in a green thread on the pile. Wait timeout seconds for any results to come in. timeout seconds to wait for results list of results accrued in that time Wait up to timeout seconds for first result to come in. timeout seconds to wait for results first item to come back, or None Bases: Timeout Bases: object Wrap an iterator to ensure that only one greenthread is inside its next() method at a time. This is useful if an iterator's next() method may perform network IO, as that may trigger a greenthread context switch (aka trampoline), which can give another greenthread a chance to call next(). At that point, you get an error like ValueError: generator already executing. By wrapping calls to next() with a mutex, we avoid that error. Bases: object File-like object that counts bytes read. To be swapped in for wsgi.input for accounting purposes. Pass read request to the underlying file-like object and add bytes read to total. Pass readline request to the underlying file-like object and add bytes read to total. Bases: ValueError Bases: object Decorator for size/time bound memoization that evicts the least recently used members. Bases: object A Namespace encapsulates parameters that define a range of the object namespace. name the name of the Namespace; this SHOULD take the form of a path to a container i.e. <accountname>/<containername>. lower the lower bound of object names contained in the namespace; the lower bound is not included in the namespace. upper the upper bound of object names contained in the namespace; the upper bound is included in the namespace. Bases: NamespaceOuterBound Bases: NamespaceOuterBound Returns True if this namespace includes the entire namespace, False otherwise. Expands the bounds as necessary to match the minimum and maximum bounds of the given donors. donors A list of Namespace True if the bounds have been modified, False otherwise. Returns True if this namespace includes the whole of the other namespace, False otherwise. other an instance of Namespace Returns True if this namespace overlaps with the other namespace. other an instance of Namespace Bases: object A custom singleton type to be subclassed for the outer bounds of Namespaces.
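A short sketch of the Namespace bounds semantics described above (the names and bounds are made up; recall that the lower bound is excluded and the upper bound included):

```python
from swift.common.utils import Namespace

ns_a = Namespace('AUTH_test/c-a', lower='', upper='m')
ns_b = Namespace('AUTH_test/c-b', lower='g', upper='t')

print('h' in ns_a)          # True: '' < 'h' <= 'm'
print(ns_a.overlaps(ns_b))  # True: they share ('g', 'm']
print(ns_a.includes(ns_b))  # False: ns_b extends beyond 'm'
```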
Bases: tuple Alias for field number 0 Alias for field number 1 Alias for field number 2 Bases: object Wrap an iterator to only yield elements at a rate of N per second. iterable iterable to wrap elementspersecond the rate at which to yield elements limit_after rate limiting kicks in only after yielding this many elements; default is 0 (rate limit immediately) Bases: object Encapsulates the components of a shard name. Instances of this class would typically be constructed via the create() or parse() class methods. Shard names have the form: <account>/<rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> Note: some instances of ShardRange have names that will NOT parse as a ShardName; e.g. a root containers own shard range will have a name format of <account>/<root_container> which will raise ValueError if passed to parse. Create an instance of ShardName. account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of account, rootcontainer, parentcontainer and timestamp. an instance of ShardName. ValueError if any argument is None Calculates the hash of a container name. container_name name to be hashed. the hexdigest of the md5 hash of container_name. ValueError if container_name is None. Parse name to an instance of ShardName. name a shard name which should have the form: <account>/ <rootcontainer>-<parentcontainer_hash>-<timestamp>-<index> an instance of ShardName. ValueError if name is not a valid shard name. Bases: Namespace A ShardRange encapsulates sharding state related to a container including lower and upper bounds that define the object namespace for which the container is responsible. Shard ranges may be persisted in a container database. Timestamps associated with subsets of the shard range attributes are used to resolve conflicts when a shard range needs to be merged with an existing shard range record and the most recent version of an attribute should be persisted. name the name of the shard range; this MUST take the form of a path to a container i.e. <accountname>/<containername>. timestamp a timestamp that represents the time at which the shard ranges lower, upper or deleted attributes were last modified. lower the lower bound of object names contained in the shard range; the lower bound is not included in the shard range" }, { "data": "upper the upper bound of object names contained in the shard range; the upper bound is included in the shard range namespace. object_count the number of objects in the shard range; defaults to zero. bytes_used the number of bytes in the shard range; defaults to zero. meta_timestamp a timestamp that represents the time at which the shard ranges objectcount and bytesused were last updated; defaults to the value of timestamp. deleted a boolean; if True the shard range is considered to be deleted. state the state; must be one of ShardRange.STATES; defaults to CREATED. state_timestamp a timestamp that represents the time at which state was forced to its current value; defaults to the value of timestamp. 
This timestamp is typically not updated with every change of state because in general conflicts in state attributes are resolved by choosing the larger state value. However, when this rule does not apply, for example when changing state from SHARDED to ACTIVE, the state_timestamp may be advanced so that the new state value is preferred over any older state value. epoch optional epoch timestamp which represents the time at which sharding was enabled for a container. reported optional indicator that this shard and its stats have been reported to the root container. tombstones the number of tombstones in the shard range; defaults to -1 to indicate that the value is unknown. Creates a copy of the ShardRange. timestamp (optional) If given, the returned ShardRange will have all of its timestamps set to this value. Otherwise the returned ShardRange will have the original timestamps. an instance of ShardRange Find this shard ranges ancestor ranges in the given shard_ranges. This method makes a best-effort attempt to identify this shard ranges parent shard range, the parents parent, etc., up to and including the root shard range. It is only possible to directly identify the parent of a particular shard range, so the search is recursive; if any member of the ancestry is not found then the search ends and older ancestors that may be in the list are not identified. The root shard range, however, will always be identified if it is present in the list. For example, given a list that contains parent, grandparent, great-great-grandparent and root shard ranges, but is missing the great-grandparent shard range, only the parent, grand-parent and root shard ranges will be identified. shard_ranges a list of instances of ShardRange a list of instances of ShardRange containing items in the given shard_ranges that can be identified as ancestors of this shard range. The list may not be complete if there are gaps in the ancestry, but is guaranteed to contain at least the parent and root shard ranges if they are present. Find this shard ranges root shard range in the given shard_ranges. shard_ranges a list of instances of ShardRange this shard ranges root shard range if it is found in the list, otherwise None. Return an instance constructed using the given dict of params. This method is deliberately less flexible than the class init() method and requires all of the init() args to be given in the dict of params. params a dict of parameters an instance of this class Increment the object stats metadata by the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer ValueError if objectcount or bytesused cannot be cast to an int. Test if this shard range is a child of another shard range. The parent-child relationship is inferred from the names of the shard" }, { "data": "This method is limited to work only within the scope of the same user-facing account (with and without shard prefix). parent an instance of ShardRange. True if parent is the parent of this shard range, False otherwise, assuming that they are within the same account. Returns a path for a shard container that is valid to use as a name when constructing a ShardRange. shards_account the hidden internal account to which the shard container belongs. root_container the name of the root container for the shard. 
parent_container the name of the parent container for the shard; for initial first generation shards this should be the same as root_container; for shards of shards this should be the name of the sharding shard container. timestamp an instance of Timestamp index a unique index that will distinguish the path from any other path generated using the same combination of shardsaccount, rootcontainer, parent_container and timestamp. a string of the form <accountname>/<containername> Given a value that may be either the name or the number of a state return a tuple of (state number, state name). state Either a string state name or an integer state number. A tuple (state number, state name) ValueError if state is neither a valid state name nor a valid state number. Returns the total number of rows in the shard range i.e. the sum of objects and tombstones. the row count Mark the shard range deleted and set timestamp to the current time. timestamp optional timestamp to set; if not given the current time will be set. True if the deleted attribute or timestamp was changed, False otherwise Set the object stats metadata to the given values and update the meta_timestamp to the current time. object_count should be an integer bytes_used should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if objectcount or bytesused cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Set state to the given value and optionally update the state_timestamp to the given time. state new state, should be an integer state_timestamp timestamp for state; if not given the state_timestamp will not be changed. True if the state or state_timestamp was changed, False otherwise Set the tombstones metadata to the given values and update the meta_timestamp to the current time. tombstones should be an integer meta_timestamp timestamp for metadata; if not given the current time will be set. ValueError if tombstones cannot be cast to an int, or if meta_timestamp is neither None nor can be cast to a Timestamp. Bases: UserList This class provides some convenience functions for working with lists of ShardRange. This class does not enforce ordering or continuity of the list items: callers should ensure that items are added in order as appropriate. Returns the total number of bytes in all items in the list. total bytes used Filter the list for those shard ranges whose namespace includes the includes name or any part of the namespace between marker and endmarker. If none of includes, marker or endmarker are specified then all shard ranges will be returned. includes a string; if not empty then only the shard range, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be" }, { "data": "A new instance of ShardRangeList containing the filtered shard ranges. Finds the first shard range satisfies the given condition and returns its lower bound. condition A function that must accept a single argument of type ShardRange and return True if the shard range satisfies the condition or False otherwise. The lower bound of the first shard range to satisfy the condition, or the upper value of this list if no such shard range is found. 
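Before the remaining list helpers, a brief sketch of how ShardRange and ShardRangeList fit together; the shard names, bounds and counts here are invented for illustration:

```python
from swift.common.utils import ShardRange, ShardRangeList, Timestamp

now = Timestamp.now()
# Two adjacent shards covering ('', 'm'] and ('m', '').
sr1 = ShardRange('.shards_AUTH_test/c-1', now, lower='', upper='m',
                 object_count=10)
sr2 = ShardRange('.shards_AUTH_test/c-2', now, lower='m', upper='',
                 object_count=5)

ranges = ShardRangeList([sr1, sr2])
print(ranges.object_count)   # 15
print(ranges.includes(sr1))  # True: sr1 lies within the list's bounds
print(sr1.overlaps(sr2))     # False: the namespaces are adjacent
```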
Check if another ShardRange namespace is enclosed between the list's lower and upper properties. Note: the list's lower and upper properties will only equal the outermost bounds of all items in the list if the list has previously been sorted. Note: the list does not need to contain an item matching other for this method to return True, although if the list has been sorted and does contain an item matching other then the method will return True. other an instance of ShardRange True if other's namespace is enclosed, False otherwise. Returns the lower bound of the first item in the list. Note: this will only be equal to the lowest bound of all items in the list if the list contents have been sorted. lower bound of first item in the list, or Namespace.MIN if the list is empty. Returns the total number of objects of all items in the list. total object count Returns the total number of rows of all items in the list. total row count Returns the upper bound of the last item in the list. Note: this will only be equal to the uppermost bound of all items in the list if the list has previously been sorted. upper bound of last item in the list, or Namespace.MIN if the list is empty. Bases: object Takes an iterator yielding sliceable things (e.g. strings or lists) and yields subiterators, each yielding up to the requested number of items from the source.

```
>>> si = Spliterator(["abcde", "fg", "hijkl"])
>>> ''.join(si.take(4))
"abcd"
>>> ''.join(si.take(3))
"efg"
>>> ''.join(si.take(1))
"h"
>>> ''.join(si.take(3))
"ijk"
>>> ''.join(si.take(3))
"l"  # shorter than requested; this can happen with the last iterator
```

Bases: GreenAsyncPile Runs jobs in a pool of green threads, spawning more jobs as results are retrieved and worker threads become available. When used as a context manager, has the same worker-killing properties as ContextPool. This is the same as itertools.starmap(), except that func is executed in a separate green thread for each item, and results won't necessarily have the same order as inputs. Bases: ClosingIterator This iterator wraps and iterates over a first iterator until it stops, and then iterates a second iterator, expecting it to stop immediately. This stringing along of the second iterator is useful when the exit of the second iterator must be delayed until the first iterator has stopped. For example, when the second iterator has already yielded its item(s) but has resources that mustn't be garbage collected until the first iterator has stopped. The second iterator is expected to have no more items and raise StopIteration when called. If this is not the case then unexpecteditemsfunc is called. iterable a first iterator that is wrapped and iterated. other_iter a second iterator that is stopped once the first iterator has stopped. unexpecteditemsfunc a no-arg function that will be called if the second iterator is found to have remaining items. Bases: object Implements a watchdog to efficiently manage concurrent timeouts. Compared to eventlet.timeouts, it reduces the number of context switches in eventlet by avoiding scheduling actions (throwing an Exception) and then unscheduling them if the timeouts are cancelled. For example: at T+0 a timeout(10) is requested => the watchdog greenlet sleeps 10 seconds; at T+1 a timeout(15) is requested => it expires after the current one, so there is no need to wake up the watchdog greenlet; at T+2 a timeout(5) is requested => it expires before the first one, so the watchdog greenlet is woken up to calculate a new sleep period; at T+7 the watchdog greenlet wakes up for the 1st timeout expiration and throws the scheduled exception. Stop the watchdog greenthread. Start the watchdog greenthread.
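A usage sketch of the Watchdog lifecycle follows; the scheduling methods it relies on are described next. The timings and the worker body are invented for illustration:

```python
import eventlet
from swift.common.utils import Watchdog, WatchdogTimeout

watchdog = Watchdog()
watchdog.spawn()  # start the single greenthread that services all timeouts
try:
    # Bound a block of work with a 1-second timeout; the context manager
    # schedules the timeout on entry and cancels it on exit.
    with WatchdogTimeout(watchdog, 1.0, eventlet.Timeout):
        eventlet.sleep(0.1)  # finishes in time, so no exception is thrown
except eventlet.Timeout:
    print('operation timed out')
finally:
    watchdog.kill()  # stop the watchdog greenthread
```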
Schedule a timeout action timeout duration before the timeout expires exc exception to throw when the timeout expires, must inherit from eventlet.Timeout timeout_at allows forcing the expiration timestamp id of the scheduled timeout, needed to cancel it Cancel a scheduled timeout key timeout id, as returned by start() Bases: object Context manager to schedule a timeout in a Watchdog instance Given a devices path and a data directory, yield (path, device, partition) for all files in that directory (devices|partitions|suffixes|hashes)_filter are meant to modify the list of elements that will be iterated. e.g.: they can be used to exclude some elements based on a custom condition defined by the caller. hookpre(device|partition|suffix|hash) are called before yielding the element, hookpost(device|partition|suffix|hash) are called after the element was yielded. They are meant to do some pre/post processing. e.g.: saving a progress status. devices parent directory of the devices to be audited datadir a directory located under self.devices. This should be one of the DATADIR constants defined in the account, container, and object servers. suffix path name suffix required for all names returned (ignored if yieldhashdirs is True) mount_check Flag to check if a mount check should be performed on devices logger a logger object devices_filter a callable taking (devices, [list of devices]) as parameters and returning a [list of devices] partitionsfilter a callable taking (datadirpath, [list of parts]) as parameters and returning a [list of parts] suffixesfilter a callable taking (partpath, [list of suffixes]) as parameters and returning a [list of suffixes] hashesfilter a callable taking (suffpath, [list of hashes]) as parameters and returning a [list of hashes] hookpredevice a callable taking device_path as parameter hookpostdevice a callable taking device_path as parameter hookprepartition a callable taking part_path as parameter hookpostpartition a callable taking part_path as parameter hookpresuffix a callable taking suff_path as parameter hookpostsuffix a callable taking suff_path as parameter hookprehash a callable taking hash_path as parameter hookposthash a callable taking hash_path as parameter error_counter a dictionary used to accumulate error counts; may add keys unmounted and unlistable_partitions yieldhashdirs if True, yield hash dirs instead of individual files A generator returning lines from a file starting with the last line, then the second last line, etc. i.e., it reads lines backwards. Stops when the first line (if any) is read. This is useful when searching for recent activity in very large files. f file object to read blocksize number of characters to go backwards at each block Get memcache connection pool from the environment (which had been previously set by the memcache middleware) env wsgi environment dict swift.common.memcached.MemcacheRing from environment Like contextlib.closing(), but doesn't crash if the object lacks a close() method. PEP 333 (WSGI) says: If the iterable returned by the application has a close() method, the server or gateway must call that method upon completion of the current request[.] This function makes that easier. Compute an ETA. Now only if we could also have a progress bar start_time Unix timestamp when the operation began current_value Current value final_value Final value ETA as a tuple of (length of time, unit of time) where unit of time is one of (h, m, s) Appends an item to a comma-separated string.
If the comma-separated string is empty/None, just returns item. Distribute items as evenly as possible into N buckets. Takes an iterator of range iters and turns it into an appropriate HTTP response body, whether that's multipart/byteranges or not. This is almost, but not quite, the inverse of requesthelpers.httpresponsetodocument_iters(). This function only yields chunks of the body, not any headers. ranges_iter an iterator of dictionaries, one per range. Each dictionary must contain at least the following key: part_iter: iterator yielding the bytes in the range Additionally, if multipart is True, then the following other keys are required: start_byte: index of the first byte in the range end_byte: index of the last byte in the range content_type: value for the range's Content-Type header Finally, there is one optional key that is used in the multipart/byteranges case: entity_length: length of the requested entity (not necessarily equal to the response length). If omitted, * will be used. Each partiter will be exhausted prior to calling next(rangesiter). boundary MIME boundary to use, sans dashes (e.g. boundary, not --boundary). multipart True if the response should be multipart/byteranges, False otherwise. This should be True if and only if you have 2 or more ranges. logger a logger Takes an iterator of range iters and yields a multipart/byteranges MIME document suitable for sending as the body of a multi-range 206 response. See documentiterstohttpresponse_body for parameter descriptions. Drain and close a swob or WSGI response. This ensures we don't log a 499 in the proxy just because we realized we don't care about the body of an error. Sets the userid/groupid of the current process, get session leader, etc. user User name to change privileges to Update recon cache values cache_dict Dictionary of cache key/value pairs to write out cache_file cache file to update logger the logger to use to log an encountered error lock_timeout timeout (in seconds) set_owner Set owner of recon cache file Install the appropriate Eventlet monkey patches. the content-type string minus any swift_bytes param, the swift_bytes value or None if the param was not found content_type a content-type string a tuple of (content-type, swift_bytes or None) Pre-allocate disk space for a file. This function can be disabled by calling disable_fallocate(). If no suitable C function is available in libc, this function is a no-op. fd file descriptor size size to allocate (in bytes) Sync modified file data to disk. fd file descriptor Filter the given Namespaces/ShardRanges to those whose namespace includes the includes name or any part of the namespace between marker and endmarker. If none of includes, marker or endmarker are specified then all Namespaces will be returned. namespaces A list of Namespace or ShardRange. includes a string; if not empty then only the Namespace, if any, whose namespace includes this string will be returned, and marker and end_marker will be ignored. marker if specified then only shard ranges whose upper bound is greater than this value will be returned. end_marker if specified then only shard ranges whose lower bound is less than this value will be returned. A filtered list of Namespace. Find a Namespace/ShardRange in given list of namespaces whose namespace contains item. item The item for which a Namespace is to be found. ranges a sorted list of Namespaces. the Namespace/ShardRange whose namespace contains item, or None if no suitable Namespace is found. Close a swob or WSGI response and maybe drain it.
Its basically free to read a HEAD or HTTPException response - the bytes are probably already in our network buffers. For a larger response we could possibly burn a lot of CPU/network trying to drain an un-used" }, { "data": "This method will read up to DEFAULTDRAINLIMIT bytes to avoid logging a 499 in the proxy when it would otherwise be easy to just throw away the small/empty body. Check to see whether or not a filesystem has the given amount of space free. Unlike fallocate(), this does not reserve any space. fspathor_fd path to a file or directory on the filesystem, or an open file descriptor; if a directory, typically the path to the filesystems mount point space_needed minimum bytes or percentage of free space ispercent if True, then spaceneeded is treated as a percentage of the filesystems capacity; if False, space_needed is a number of free bytes. True if the filesystem has at least that much free space, False otherwise OSError if fs_path does not exist Sync modified file data and metadata to disk. fd file descriptor Sync directory entries to disk. dirpath Path to the directory to be synced. Given the path to a db file, return a sorted list of all valid db files that actually exist in that paths dir. A valid db filename has the form: ``` <hash>[_<epoch>].db ``` where <hash> matches the <hash> part of the given db_path as would be parsed by parsedbfilename(). db_path Path to a db file that does not necessarily exist. List of valid db files that do exist in the dir of the db_path. This list may be empty. Returns an expiring object container name for given X-Delete-At and (native string) a/c/o. Checks whether poll is available and falls back on select if it isnt. Note about epoll: Review: https://review.opendev.org/#/c/18806/ There was a problem where once out of every 30 quadrillion connections, a coroutine wouldnt wake up when the client closed its end. Epoll was not reporting the event or it was getting swallowed somewhere. Then when that file descriptor was re-used, eventlet would freak right out because it still thought it was waiting for activity from it in some other coro. Another note about epoll: its hard to use when forking. epoll works like so: create an epoll instance: efd = epoll_create(...) register file descriptors of interest with epollctl(efd, EPOLLCTL_ADD, fd, ...) wait for events with epoll_wait(efd, ...) If you fork, you and all your child processes end up using the same epoll instance, and everyone becomes confused. It is possible to use epoll and fork and still have a correct program as long as you do the right things, but eventlet doesnt do those things. Really, it cant even try to do those things since it doesnt get notified of forks. In contrast, both poll() and select() specify the set of interesting file descriptors with each call, so theres no problem with forking. As eventlet monkey patching is now done before call get_hub() in wsgi.py if we use import select we get the eventlet version, but since version 0.20.0 eventlet removed select.poll() function in patched select (see: http://eventlet.net/doc/changelog.html and https://github.com/eventlet/eventlet/commit/614a20462). We use eventlet.patcher.original function to get python select module to test if poll() is available on platform. Return partition number for given hex hash and partition power. :param hex_hash: A hash string :param part_power: partition power :returns: partition number devices directory where devices are mounted (e.g. 
/srv/node) path full path to an object file or hashdir the (integer) partition from the path Extract a redirect location from a response's headers. response a response a tuple of (path, Timestamp) if a Location header is found, otherwise None ValueError if the Location header is found but an X-Backend-Redirect-Timestamp is not found, or if there is a problem with the format of either header Get a normalized length of time in the largest unit of time (hours, minutes, or seconds). time_amount length of time in seconds A tuple of (length of time, unit of time) where unit of time is one of (h, m, s) This allows the caller to make a list of things with indexes, where the first item (zero indexed) is just the bare base string, and subsequent indexes are appended -1, -2, etc. e.g.: ``` 'lock', None => 'lock' 'lock', 0 => 'lock' 'lock', 1 => 'lock-1' 'object', 2 => 'object-2' ``` base a string, the base string; when index is 0 (or None) this is the identity function. index a digit, typically an integer (or None); for values other than 0 or None this digit is appended to the base string separated by a hyphen. Get the canonical hash for an account/container/object account Account container Container object Object raw_digest If True, return the raw version rather than a hex digest hash string Returns the number in a human readable format; for example 1048576 = 1Mi. Test if a file mtime is older than the given age, suppressing any OSErrors. path first and only argument passed to os.stat age age in seconds True if age is less than or equal to zero or if the file mtime is more than age in the past; False if age is greater than zero and the file mtime is less than or equal to age in the past or if there is an OSError while stating the file. Test whether a path is a mount point. This will catch any exceptions and translate them into a False return value. Use ismount_raw to have the exceptions raised instead. Test whether a path is a mount point. Whereas ismount will catch any exceptions and just return False, this raw version will not catch exceptions. This is code hijacked from C Python 2.6.8, adapted to remove the extra lstat() system call. Get a value from the wsgi environment env wsgi environment dict item_name name of item to get the value from the environment Given a multi-part-mime-encoded input file object and boundary, yield file-like objects for each part. Note that this does not split each part into headers and body; the caller is responsible for doing that if necessary. wsgi_input The file-like object to read from. boundary The mime boundary to separate new file-like objects on. A generator of file-like objects for each part. MimeInvalid if the document is malformed Creates a link to the file descriptor at the target_path specified. This method does not close the fd for you. Unlike rename, as linkat() cannot overwrite target_path if it exists, we unlink and try again. Attempts to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. fd File descriptor to be linked target_path Path in filesystem where fd is to be linked dirs_created Number of newly created directories that need to be fsync'd. retries number of retries to make fsync fsync on containing directory of target_path and also all the newly created directories. Splits the str given and returns a properly stripped list of the comma separated values. Load a recon cache file. Treats missing file as empty. Context manager that acquires a lock on a file. 
This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file to be locked timeout timeout (in" }, { "data": "If None, defaults to DEFAULTLOCKTIMEOUT append True if file should be opened in append mode unlink True if the file should be unlinked at the end Context manager that acquires a lock on the parent directory of the given file path. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). filename file path of the parent directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULTLOCKTIMEOUT Context manager that acquires a lock on a directory. This will block until the lock can be acquired, or the timeout time has expired (whichever occurs first). For locking exclusively, file or directory has to be opened in Write mode. Python doesnt allow directories to be opened in Write Mode. So we workaround by locking a hidden file in the directory. directory directory to be locked timeout timeout (in seconds). If None, defaults to DEFAULTLOCKTIMEOUT timeout_class The class of the exception to raise if the lock cannot be granted within the timeout. Will be constructed as timeout_class(timeout, lockpath). Default: LockTimeout limit The maximum number of locks that may be held concurrently on the same directory at the time this method is called. Note that this limit is only applied during the current call to this method and does not prevent subsequent calls giving a larger limit. Defaults to 1. name A string to distinguishes different type of locks in a directory TypeError if limit is not an int. ValueError if limit is less than 1. Given a path to a db file, return a modified path whose filename part has the given epoch. A db filename takes the form <hash>[_<epoch>].db; this method replaces the <epoch> part of the given db_path with the given epoch value, or drops the epoch part if the given epoch is None. db_path Path to a db file that does not necessarily exist. epoch A string (or None) that will be used as the epoch in the new paths filename; non-None values will be normalized to the normal string representation of a Timestamp. A modified path to a db file. ValueError if the epoch is not valid for constructing a Timestamp. Same as os.makedirs() except that this method returns the number of new directories that had to be created. Also, this does not raise an error if target directory already exists. This behaviour is similar to Python 3.xs os.makedirs() called with exist_ok=True. Also similar to swift.common.utils.mkdirs() https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212 Takes an iterator that may or may not contain a multipart MIME document as well as content type and returns an iterator of body iterators. app_iter iterator that may contain a multipart MIME document contenttype content type of the appiter, used to determine whether it conains a multipart document and, if so, what the boundary is between documents Get the MD5 checksum of a file. fname path to file MD5 checksum, hex encoded Returns a decorator that logs timing events or errors for public methods in MemcacheRing class, such as memcached set, get and etc. Takes a file-like object containing a multipart MIME document and returns an iterator of (headers, body-file) tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not divider) readchunksize size of strings read via input_file.read() Ensures the path is a directory or makes it if not. 
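A usage sketch for the locking context managers described above; the paths and timeout are examples only, and this assumes the documented lock_path/lock_file signatures from swift.common.utils:

```
from swift.common.utils import lock_path, lock_file
from swift.common.exceptions import LockTimeout

try:
    # lock_path locks a hidden file inside the directory, since Python
    # cannot open a directory itself in write mode.
    with lock_path('/srv/node/sdb1/tmp', timeout=10):
        pass  # do work while holding the directory lock
except LockTimeout:
    pass  # another process held the lock longer than 10 seconds

# lock_file yields the open file object while the lock is held.
with lock_file('/tmp/example.lock', timeout=10, append=True) as fd:
    fd.write('held\n')
```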
Errors if the path exists but is a file or on permissions failure. path path to create Apply all swift monkey patching consistently in one place. Takes a file-like object containing a multipart/byteranges MIME document (see RFC 7233, Appendix A) and returns an iterator of (first-byte, last-byte, length, document-headers, body-file) 5-tuples. input_file file-like object with the MIME doc in it boundary MIME boundary, sans dashes (e.g. divider, not --divider) read_chunk_size size of strings read via input_file.read() Get a string representation of a node's location. node_dict a dict describing a node replication if True then the replication ip address and port are used, otherwise the normal ip address and port are used. a string of the form <ip address>:<port>/<device> Takes a dict from a container listing and overrides the content_type, bytes fields if swift_bytes is set. Returns an iterator of all pairs of elements from item_list. item_list items (no duplicates allowed) Given the value of a header like: Content-Disposition: form-data; name="somefile"; filename="test.html" Return data like ("form-data", {"name": "somefile", "filename": "test.html"}) header Value of a header (the part after the : ). (value name, dict) of the attribute data parsed (see above). Parse a content-range header into (first_byte, last_byte, total_size). See RFC 7233 section 4.2 for details on the header format, but it's basically Content-Range: bytes ${start}-${end}/${total}. content_range Content-Range header value to parse, e.g. bytes 100-1249/49004 3-tuple (start, end, total) ValueError if malformed Parse a content-type and its parameters into values. RFC 2616 sec 14.17 and 3.7 are pertinent. Examples: ``` 'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')]) 'text/plain; charset=UTF-8; level=1' -> ('text/plain', [('charset', 'UTF-8'), ('level', '1')]) ``` content_type content-type string to parse a tuple containing (content type, list of (k, v) parameter tuples) Splits a db filename into three parts: the hash, the epoch, and the extension. ``` >>> parse_db_filename("ab2134.db") ('ab2134', None, '.db') >>> parse_db_filename("ab2134_1234567890.12345.db") ('ab2134', '1234567890.12345', '.db') ``` filename A db file basename or path to a db file. A tuple of (hash, epoch, extension). epoch may be None. ValueError if filename is not a path to a file. Takes a file-like object containing a MIME document and returns a HeaderKeyDict containing the headers. The body of the message is not consumed: the position in doc_file is left at the beginning of the body. This function was inspired by the Python standard library's http.client.parse_headers. doc_file binary file-like object containing a MIME document a swift.common.swob.HeaderKeyDict containing the headers Parse standard swift server/daemon options with optparse.OptionParser. parser OptionParser to use. If not sent one will be created. once Boolean indicating the once option is available test_config Boolean indicating the test-config option is available test_args Override sys.argv; used in testing Tuple of (config, options); config is an absolute path to the config file, options is the parser options as a dictionary. SystemExit First arg (CONFIG) is required, file must exist Figure out which policies, devices, and partitions we should operate on, based on kwargs. If override_policies is already present in kwargs, then return that value. This happens when using multiple worker processes; the parent process supplies override_policies=X to each child process. 
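The Content-Range parsing described above can be reproduced in a few lines; this standalone sketch follows the documented 3-tuple contract and is an illustration, not Swift's source:

```
import re

CONTENT_RANGE_RE = re.compile(r'^bytes (\d+)-(\d+)/(\d+)$')

def parse_content_range(content_range):
    # Returns (first_byte, last_byte, total_size) or raises ValueError.
    match = CONTENT_RANGE_RE.match(content_range)
    if not match:
        raise ValueError('invalid Content-Range: %r' % content_range)
    return tuple(int(x) for x in match.groups())

assert parse_content_range('bytes 100-1249/49004') == (100, 1249, 49004)
```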
Otherwise, in run-once mode, look at the policies keyword argument. This is the value of the policies command-line option. In run-forever mode or if no policies option was provided, an empty list will be returned. The procedures for devices and partitions are similar. a named tuple with fields devices, partitions, and policies. Decorator to declare which methods are privately accessible as HTTP requests with an X-Backend-Allow-Private-Methods: True override func function to make private Decorator to declare which methods are publicly accessible as HTTP requests func function to make public De-allocate disk space in the middle of a file. fd file descriptor offset index of first byte to de-allocate length number of bytes to de-allocate Update a recon cache entry item. If item is an empty dict then any existing key in cache_entry will be" }, { "data": "Similarly if item is a dict and any of its values are empty dicts then the corresponding key will be deleted from the nested dict in cache_entry. We use nested recon cache entries when the object auditor runs in parallel or else in once mode with a specified subset of devices. cache_entry a dict of existing cache entries key key for item to update item value for item to update quorum size as it applies to services that use replication for data integrity (Account/Container services). Object quorum_size is defined on a storage policy basis. Number of successful backend requests needed for the proxy to consider the client request successful. Will eventlet.sleep() for the appropriate time so that the max_rate is never exceeded. If max_rate is 0, will not ratelimit. The maximum recommended rate should not exceed (1000 * incr_by) a second as eventlet.sleep() does involve some overhead. Returns running_time that should be used for subsequent calls. running_time the running time in milliseconds of the next allowable request. Best to start at zero. max_rate The maximum rate per second allowed for the process. incr_by How much to increment the counter. Useful if you want to ratelimit 1024 bytes/sec and have differing sizes of requests. Must be > 0 to engage rate-limiting behavior. rate_buffer Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. Must be > 0 to engage rate-limiting behavior. The absolute time for the next interval in milliseconds; note that time could have passed well beyond that point, but the next call will catch that and skip the sleep. Consume the first truthy item from an iterator, then re-chain it to the rest of the iterator. This is useful when you want to make sure the prologue to downstream generators have been executed before continuing. :param iterable: an iterable object Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored path first and only argument passed to os.rmdir Quiet wrapper for os.unlink, OSErrors are suppressed path first and only argument passed to os.unlink Attempt to fix / hide race conditions like empty object directories being removed by backend processes during uploads, by retrying. The containing directory of new and of all newly created directories are fsyncd by default. This will come at a performance penalty. In cases where these additional fsyncs are not necessary, it is expected that the caller of renamer() turn it off explicitly. old old path to be renamed new new path to be renamed to fsync fsync on containing directory of new and also all the newly created directories. 
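A usage sketch for the rate limiting described above, assuming the documented ratelimit_sleep signature; the 25 ops/sec figure is an arbitrary example:

```
from swift.common.utils import ratelimit_sleep

running_time = 0  # best to start at zero, per the docs
for item in range(100):
    # Sleeps just enough that we never exceed max_rate per second;
    # the return value must be fed back in on the next call.
    running_time = ratelimit_sleep(running_time, max_rate=25)
    # ... do the rate-limited work for `item` here ...
```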
Takes a path and a partition power and returns the same path, but with the correct partition number. Most useful when increasing the partition power. devices directory where devices are mounted (e.g. /srv/node) path full path to a object file or hashdir part_power partition power to compute correct partition number Path with re-computed partition power Decorator to declare which methods are accessible for different type of servers: If option replication_server is None then this decorator doesnt matter. If option replication_server is True then ONLY decorated with this decorator methods will be started. If option replication_server is False then decorated with this decorator methods will NOT be started. func function to mark accessible for replication Takes a list of iterators, yield an element from each in a round-robin fashion until all of them are" }, { "data": ":param its: list of iterators Transform ip string to an rsync-compatible form Will return ipv4 addresses unchanged, but will nest ipv6 addresses inside square brackets. ip an ip string (ipv4 or ipv6) a string ip address Interpolate devices variables inside a rsync module template template rsync module template as a string device a device from a ring a string with all variables replaced by device attributes Look in root, for any files/dirs matching glob, recursively traversing any found directories looking for files ending with ext root start of search path glob_match glob to match in root, matching dirs are traversed with os.walk ext only files that end in ext will be returned exts a list of file extensions; only files that end in one of these extensions will be returned; if set this list overrides any extension specified using the ext param. dirext if present directories that end with dirext will not be traversed and instead will be returned as a matched path list of full paths to matching files, sorted Get the ip address and port that should be used for the given node_dict. If use_replication is True then the replication ip address and port are returned. If use_replication is False (the default) and the node dict has an item with key use_replication then that items value will determine if the replication ip address and port are returned. If neither usereplication nor nodedict['use_replication'] indicate otherwise then the normal ip address and port are returned. node_dict a dict describing a node use_replication if True then the replication ip address and port are returned. a tuple of (ip address, port) Sets the directory from which swift config files will be read. If the given directory differs from that already set then the swift.conf file in the new directory will be validated and storage policies will be reloaded from the new swift.conf file. swift_dir non-default directory to read swift.conf from Get the storage directory datadir Base data directory partition Partition name_hash Account, container or object name hash Storage directory Constant-time string comparison. the first string the second string True if the strings are equal. This function takes two strings and compares them. It is intended to be used when doing a comparison for authentication purposes to help guard against timing attacks. Validate and decode Base64-encoded data. The stdlib base64 module silently discards bad characters, but we often want to treat them as an error. 
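The constant-time comparison described above can be approximated with the standard library; this sketch is a behavioral equivalent for illustration, not Swift's implementation:

```
import hmac

def streq_const_time(s1, s2):
    # hmac.compare_digest takes time independent of where the first
    # mismatch occurs, which is what guards against timing attacks.
    return hmac.compare_digest(s1.encode('utf-8'), s2.encode('utf-8'))

assert streq_const_time('secret-token', 'secret-token')
assert not streq_const_time('secret-token', 'Secret-token')
```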
value some base64-encoded data allow_line_breaks if True, ignore carriage returns and newlines the decoded data ValueError if value is not a string, contains invalid characters, or has insufficient padding Send systemd-compatible notifications. Notify the service manager that started this process, if it has set the NOTIFY_SOCKET environment variable. For example, systemd will set this when the unit has Type=notify. More information can be found in systemd documentation: https://www.freedesktop.org/software/systemd/man/sd_notify.html Common messages include: ``` READY=1 RELOADING=1 STOPPING=1 STATUS=<some string> ``` logger a logger object msg the message to send Returns a decorator that logs timing events or errors for public methods in Swift's WSGI server controllers, based on response code. Remove any file in a given path that was last modified before mtime. path path to remove file from mtime timestamp of oldest file to keep Remove any files from the given list that were last modified before mtime. filepaths a list of strings, the full paths of files to check mtime timestamp of oldest file to keep Validate that a device and a partition are valid and won't lead to directory traversal when used. device device to validate partition partition to validate ValueError if given an invalid device or partition Validates an X-Container-Sync-To header value, returning the validated endpoint, realm, and realm_key, or an error string. value The X-Container-Sync-To header value to validate. allowed_sync_hosts A list of allowed hosts in endpoints, if realms_conf does not apply. realms_conf An instance of swift.common.container_sync_realms.ContainerSyncRealms to validate against. A tuple of (error_string, validated_endpoint, realm, realm_key). The error_string will be None if the rest of the values have been validated. The validated_endpoint will be the validated endpoint to sync to. The realm and realm_key will be set if validation was done through realms_conf. Write contents to file at path path any path, subdirs will be created as needed contents data to write to file, will be converted to string Ensure that a pickle file gets written to disk. The file is first written to a tmp location, synced to disk, and then moved to its final location obj python object to be pickled dest path of final destination file tmp path to tmp to use, defaults to None pickle_protocol protocol to pickle the obj with, defaults to 0 WSGI tools for use with swift. Bases: NamedConfigLoader Read configuration from multiple files under the given path. Bases: Exception Bases: ConfigFileError Bases: NamedConfigLoader Wrap a raw config string up for paste.deploy. If you give one of these to our loadcontext (e.g. give it to our appconfig) we'll intercept it and get it routed to the right loader. Bases: ConfigLoader Patch paste.deploy's ConfigLoader so each context object will know what config section it came from. Bases: object This class provides a number of utility methods for modifying the composition of a WSGI pipeline. Creates a context for a filter that can subsequently be added to a pipeline context. entry_point_name entry point of the middleware (Swift only) a filter context Returns the first index of the given entry point name in the pipeline. Raises ValueError if the given module is not in the pipeline. Inserts a filter module into the pipeline context. ctx the context to be inserted index (optional) index at which filter should be inserted in the list of pipeline filters. 
Default is 0, which means the start of the pipeline. Tests if the pipeline starts with the given entry point name. entrypointname entry point of middleware or app (Swift only) True if entrypointname is first in pipeline, False otherwise Bases: GreenPool Works the same as GreenPool, but if the size is specified as one, then the spawn_n() method will invoke waitall() before returning to prevent the caller from doing any other work (like calling accept()). Create a greenthread to run the function, the same as spawn(). The difference is that spawn_n() returns None; the results of function are not retrievable. Bases: StrategyBase WSGI server management strategy object for an object-server with one listen port per unique local port in the storage policy rings. The serversperport integer config setting determines how many workers are run per port. Tracking data is a map like port -> [(pid, socket), ...]. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. serversperport (int) The number of workers to run per port. Yields all known listen sockets. Log a servers exit. Return timeout before checking for reloaded rings. The time to wait for a child to exit before checking for reloaded rings (new ports). Yield a sequence of (socket, (port, server_idx)) tuples for each server which should be forked-off and" }, { "data": "Any sockets for orphaned ports no longer in any ring will be closed (causing their associated workers to gracefully exit) after all new sockets have been yielded. The server_idx item for each socket will passed into the logsockexit() and registerworkerstart() methods. This strategy does not support running in the foreground. Called when a worker has exited. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. data (tuple) The sockets (port, server_idx) as yielded by newworkersocks(). pid (int) The new worker process PID Bases: object Some operations common to all strategy classes. Called in each forked-off child process, prior to starting the actual wsgi server, to perform any initialization such as drop privileges. Set the close-on-exec flag on any listen sockets. Shutdown any listen sockets. Signal that the server is up and accepting connections. Bases: object This class provides a means to provide context (scope) for a middleware filter to have access to the wsgi start_response results like the request status and headers. Bases: StrategyBase WSGI server management strategy object for a single bind port and listen socket shared by a configured number of forked-off workers. Tracking data is a map of pid -> socket. Used in run_wsgi(). conf (dict) Server configuration dictionary. logger The servers LogAdaptor object. Yields all known listen sockets. Log a servers exit. sock (socket) The listen socket for the worker just started. unused The sockets opaquedata yielded by newworkersocks(). We want to keep from busy-waiting, but we also need a non-None value so the main loop gets a chance to tell whether it should keep running or not (e.g. SIGHUP received). So we return 0.5. Yield a sequence of (socket, opqaue_data) tuples for each server which should be forked-off and started. The opaque_data item for each socket will passed into the logsockexit() and registerworkerstart() methods where it will be ignored. Return a server listen socket if the server should run in the foreground (no fork). Called when a worker has exited. 
NOTE: a re-execed server can reap the dead worker PIDs from the old server process that is being replaced as part of a service reload (SIGUSR1). So we need to be robust to getting some unknown PID here. pid (int) The PID of the worker that exited. Called when a new worker is started. sock (socket) The listen socket for the worker just started. unused The socket's opaque_data yielded by new_worker_socks(). pid (int) The new worker process PID Bind socket to bind ip:port in conf conf Configuration dict to read settings from a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file Loads common settings from conf Sets the logger Loads the request processor conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from the loaded application entry point ConfigFileError Exception is raised for config file error Read the app config section from a config file. conf_file path to a config file a dict Loads a context from a config file, and if the context is a pipeline then presents the app with the opportunity to modify the pipeline. conf_file path to a config file global_conf a dict of options to update the loaded config. Options in global_conf will override those in conf_file except where the conf_file option is preceded by set. allow_modify_pipeline if True, and the context is a pipeline, and the loaded app has a modify_wsgi_pipeline property, then that property will be called before the pipeline is loaded. the loaded app Returns a new fresh WSGI environment. env The WSGI environment to base the new environment on. method The new REQUEST_METHOD or None to use the original. path The new path_info or none to use the original. path should NOT be quoted. When building a url, a Webob Request (in accordance with the WSGI spec) will quote env['PATH_INFO']. url += quote(environ['PATH_INFO']) query_string The new query string or none to use the original. When building a url, a Webob Request will append the query string directly to the url. url += '?' + env['QUERY_STRING'] agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. Fresh WSGI environment. Same as make_env() but with preauthorization. Same as make_subrequest() but with preauthorization. Makes a new swob.Request based on the current env but with the parameters specified. env The WSGI environment to base the new request on. method HTTP method of new request; default is from the original env. path HTTP path of new request; default is from the original env. path should be compatible with what you would send to Request.blank. path should be quoted and it can include a query string. for example: /a%20space?unicode_str%E8%AA%9E=y%20es body HTTP body of new request; empty by default. headers Extra HTTP headers of new request; None by default. agent The HTTP user agent to use; default Swift. You can put %(orig)s in the agent to have it replaced with the original env's HTTP_USER_AGENT, such as %(orig)s StaticWeb. You can also set agent to None to use the original env's HTTP_USER_AGENT or to have no HTTP_USER_AGENT. swift_source Used to mark the request as originating out of middleware. Will be logged in proxy logs. make_env make_subrequest calls this make_env to help build the swob.Request. 
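A usage sketch for make_subrequest as described above; the path, agent and swift_source values are examples, and `app` is assumed to be the WSGI app that a middleware wraps:

```
from swift.common.wsgi import make_subrequest

def fetch_container_listing(app, env):
    sub_req = make_subrequest(
        env, method='GET', path='/v1/AUTH_test/a-container',
        agent='%(orig)s ExampleMiddleware', swift_source='EX')
    # Dispatch the subrequest to the wrapped app and return the
    # response; the subrequest is logged with the EX swift_source.
    return sub_req.get_response(app)
```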
Fresh swob.Request object. Runs the server according to some strategy. The default strategy runs a specified number of workers in pre-fork model. The object-server (only) may use a servers-per-port strategy if its config has a serversperport setting with a value greater than zero. conf_path Path to paste.deploy style configuration file/directory app_section App name from conf file to load config from allowmodifypipeline Boolean for whether the server should have an opportunity to change its own pipeline. Defaults to True test_config if False (the default) then load and validate the config and if successful then continue to run the server; if True then load and validate the config but do not run the server. 0 if successful, nonzero otherwise Wrap a function whos first argument is a paste.deploy style config uri, such that you can pass it an un-adorned raw filesystem path (or config string) and the config directive (either config:, config_dir:, or config_str:) will be added automatically based on the type of entity (either a file or directory, or if no such entity on the file system - just a string) before passing it through to the paste.deploy function. Bases: object Represents a storage policy. Not meant to be instantiated directly; implement a derived subclasses (e.g. StoragePolicy, ECStoragePolicy, etc) or use reloadstoragepolicies() to load POLICIES from swift.conf. The objectring property is lazy loaded once the services swiftdir is known via getobjectring(), but it may be over-ridden via object_ring kwarg at create time for testing or actively loaded with load_ring(). Adds an alias name to the storage" }, { "data": "Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. name a new alias for the storage policy Changes the primary/default name of the policy to a specified name. name a string name to replace the current primary name. Return an instance of the diskfile manager class configured for this storage policy. args positional args to pass to the diskfile manager constructor. kwargs keyword args to pass to the diskfile manager constructor. A disk file manager instance. Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Load the ring for this policy immediately. swift_dir path to rings reload_time time interval in seconds to check for a ring change Number of successful backend requests needed for the proxy to consider the client request successful. Decorator for Storage Policy implementations to register their StoragePolicy class. This will also set the policy_type attribute on the registered implementation. Removes an alias name from the storage policy. Shouldnt be called directly from the storage policy but instead through the storage policy collection class, so lookups by name resolve correctly. If the name removed is the primary name then the next available alias will be adopted as the new primary name. name a name assigned to the storage policy Validation hook used when loading the ring; currently only used for EC Bases: BaseStoragePolicy Represents a storage policy of type erasure_coding. Not meant to be instantiated directly; use reloadstoragepolicies() to load POLICIES from swift.conf. This short hand form of the important parts of the ec schema is stored in Object System Metadata on the EC Fragment Archives for debugging. Maximum length of a fragment, including header. 
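A sketch of typical interaction with the module-level POLICIES collection and the per-policy ring loading described here; the policy name 'gold' and the swift_dir are examples:

```
from swift.common.storage_policy import POLICIES

policy = POLICIES.get_by_name('gold')   # None if no such policy
default_policy = POLICIES.default       # the policy declared default
# Lazily load and return the object ring for this policy's index.
ring = POLICIES.get_object_ring(policy.idx, '/etc/swift')
```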
NB: a fragment archive is a sequence of 0 or more max-length fragments followed by one possibly-shorter fragment. Backend index for PyECLib node_index integer of node index integer of actual fragment index. If param is not an integer, return None instead Return the info dict and conf file options for this policy. config boolean, if True all config options are returned Number of successful backend requests needed for the proxy to consider the client PUT request successful. The quorum size for EC policies defines the minimum number of data + parity elements required to be able to guarantee the desired fault tolerance, which is the number of data elements supplemented by the minimum number of parity elements required by the chosen erasure coding scheme. For example, for Reed-Solomon, the minimum number of parity elements required is 1, and thus the quorum_size requirement is ec_ndata + 1. Given the number of parity elements required is not the same for every erasure coding scheme, consult PyECLib for min_parity_fragments_needed() EC specific validation Replica count check - we need at least (#data + #parity) replicas configured. Also if the replica count is larger than exactly that number there's a non-zero risk of error for code that is considering the number of nodes in the primary list from the ring. Bases: ValueError Bases: BaseStoragePolicy Represents a storage policy of type replication. Default storage policy class unless otherwise overridden from swift.conf. Not meant to be instantiated directly; use reload_storage_policies() to load POLICIES from swift.conf. floor(number of replicas / 2) + 1 Bases: object This class represents the collection of valid storage policies for the cluster and is instantiated as StoragePolicy objects are added to the collection when swift.conf is parsed by parse_storage_policies(). When a StoragePolicyCollection is created, the following validation is enforced: If a policy with index 0 is not declared and no other policies defined, Swift will create one; The policy index must be a non-negative integer; If no policy is declared as the default and no other policies are defined, the policy with index 0 is set as the default; Policy indexes must be unique; Policy names are required; Policy names are case insensitive; Policy names must contain only letters, digits or a dash; Policy names must be unique; The policy name Policy-0 can only be used for the policy with index 0; If any policies are defined, exactly one policy must be declared default; Deprecated policies can not be declared the default. Adds a new name or names to a policy policy_index index of a policy in this policy collection. aliases arbitrary number of string policy names to add. Changes the primary or default name of a policy. The new primary name can be an alias that already belongs to the policy or a completely new name. policy_index index of a policy in this policy collection. new_name a string name to set as the new default name. Find a storage policy by its index. An index of None will be treated as 0. index numeric index of the storage policy storage policy, or None if no such policy Find a storage policy by its name. name name of the policy storage policy, or None Get the ring object to use to handle a request based on its policy. An index of None will be treated as 0. policy_idx policy index as defined in swift.conf swift_dir swift_dir used by the caller appropriate ring object Build info about policies for the /info endpoint list of dicts containing relevant policy information Removes a name or names from a policy. 
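A worked example of the two quorum rules described above; min_parity defaults to 1 here because that is the Reed-Solomon minimum mentioned in the text:

```
def replication_quorum(replica_count):
    # floor(number of replicas / 2) + 1
    return replica_count // 2 + 1

def ec_put_quorum(ec_ndata, min_parity=1):
    # data fragments plus the scheme's minimum parity fragments
    return ec_ndata + min_parity

assert replication_quorum(3) == 2    # 3-replica policy
assert ec_put_quorum(10) == 11       # e.g. a 10+4 Reed-Solomon policy
```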
If the name removed is the primary name then the next available alias will be adopted as the new primary name. aliases arbitrary number of existing policy names to remove. Bases: object An instance of this class is the primary interface to storage policies exposed as a module level global named POLICIES. This global reference wraps _POLICIES which is normally instantiated by parsing swift.conf and will result in an instance of StoragePolicyCollection. You should never patch this instance directly, instead patch the module level _POLICIES instance so that swift code which imported POLICIES directly will reference the patched StoragePolicyCollection. Helper function to construct a string from a base and the policy. Used to encode the policy index into either a file name or a directory name by various modules. base the base string policyorindex StoragePolicy instance, or an index (string or int), if None the legacy storage Policy-0 is assumed. base name with policy index added PolicyError if no policy exists with the given policy_index Parse storage policies in swift.conf - note that validation is done when the StoragePolicyCollection is instantiated. conf ConfigParser parser object for swift.conf Reload POLICIES from swift.conf. Helper function to convert a string representing a base and a policy. Used to decode the policy from either a file name or a directory name by various modules. policy_string base name with policy index added PolicyError if given index does not map to a valid policy a tuple, in the form (base, policy) where base is the base string and policy is the StoragePolicy instance for the index encoded in the policy_string. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license." } ]
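A round-trip sketch of the get_policy_string / split_policy_string helpers described above, assuming a cluster whose swift.conf defines a policy with index 1:

```
from swift.common.storage_policy import (
    POLICIES, get_policy_string, split_policy_string)

name = get_policy_string('objects', 1)       # -> 'objects-1'
base, policy = split_policy_string(name)
assert base == 'objects' and policy == POLICIES[1]
# Index 0 (or None) maps back to the legacy, unsuffixed name.
assert get_policy_string('objects', 0) == 'objects'
```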
{ "category": "Runtime", "file_name": "misc.html#utils.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: object Walk through file system to audit objects Entrypoint to object_audit, with a failsafe generic exception handler. Audits the given object location. location an audit location (from diskfile.objectauditlocation_generator) Based on configs objectsizestats will keep track of how many objects fall into the specified ranges. For example with the following: objectsizestats = 10, 100, 1024 and your system has 3 objects of sizes: 5, 20, and 10000 bytes the log will look like: {10: 1, 100: 1, 1024: 0, OVER: 1} Bases: Daemon Audit objects. Parallel audit loop Clear recon cache entries Child execution Run the object audit Run the object audit until stopped. Run the object audit once Bases: object Run the user-supplied watcher. Simple and gets the job done. Note that we arent doing anything to isolate ourselves from hangs or file descriptor leaks in the plugins. Disk File Interface for the Swift Object Server The DiskFile, DiskFileWriter and DiskFileReader classes combined define the on-disk abstraction layer for supporting the object server REST API interfaces (excluding REPLICATE). Other implementations wishing to provide an alternative backend for the object server must implement the three classes. An example alternative implementation can be found in the memserver.py and memdiskfile.py modules along size this one. The DiskFileManager is a reference implemenation specific class and is not part of the backend API. The remaining methods in this module are considered implementation specific and are also not considered part of the backend API. Bases: object Represents an object location to be audited. Other than being a bucket of data, the only useful thing this does is stringify to a filesystem path so the auditors logs look okay. Bases: object Manage object files. This specific implementation manages object files on a disk formatted with a POSIX-compliant file system that supports extended attributes as metadata on a file or directory. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. The following path format is used for data file locations: <devicespath/<devicedir>/<datadir>/<partdir>/<suffixdir>/<hashdir>/ <datafile>.<ext> mgr associated DiskFileManager instance device_path path to the target device or drive partition partition on the device in which the object lives account account name for the object container container name for the object obj object name for the object _datadir override the full datadir otherwise constructed here policy the StoragePolicy instance use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations open_expired if True, open() will not raise a DiskFileExpired if object is expired nextpartpower the next partition power to be used Context manager to create a file. We create a temporary file first, and then return a DiskFileWriter object to encapsulate the state. Note An implementation is not required to perform on-disk preallocations even if the parameter is specified. But if it does and it fails, it must raise a DiskFileNoSpace exception. size optional initial size of file to explicitly allocate on disk extension file extension to use for the newly-created file; defaults to .data for the sake of tests DiskFileNoSpace if a size is specified and allocation fails Delete the object. This implementation creates a tombstone file using the given timestamp, and removes any older versions of the object file. 
Any file that has an older timestamp than timestamp will be deleted. Note An implementation is free to use or ignore the timestamp parameter. timestamp timestamp to compare with each file DiskFileError this implementation will raise the same errors as the create() method. Provides the timestamp of the newest data file found in the object directory. A Timestamp instance, or None if no data file was found. DiskFileNotOpen if the open() method has not been previously called on this instance. Provide the datafile metadata for a previously opened object as a dictionary. This is metadata that was included when the object was first PUT, and does not include metadata set by any subsequent POST. object's datafile metadata dictionary DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Provide the metadata for a previously opened object as a dictionary. object's metadata dictionary DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Provide the metafile metadata for a previously opened object as a dictionary. This is metadata that was written by a POST and does not include any persistent metadata that was set by the original PUT. object's .meta file metadata dictionary, or None if there is no .meta file DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Open the object. This implementation opens the data file representing the object, reads the associated metadata in the extended attributes, additionally combining metadata from fast-POST .meta files. modernize if set, update this diskfile to the latest format. Currently, this means adding metadata checksums if none are present. current_time Unix time used in checking expiration. If not present, the current time will be used. Note An implementation is allowed to raise any of the following exceptions, but is only required to raise DiskFileNotExist when the object representation does not exist. DiskFileCollision on name mis-match with metadata DiskFileNotExist if the object does not exist DiskFileDeleted if the object was previously deleted DiskFileQuarantined if while reading metadata of the file some data did not pass cross checks itself for use as a context manager Return the metadata for an object without requiring the caller to open the object first. current_time Unix time used in checking expiration. If not present, the current time will be used. metadata dictionary for an object DiskFileError this implementation will raise the same errors as the open() method. Return a swift.common.swob.Response class compatible app_iter object as defined by swift.obj.diskfile.DiskFileReader. For this implementation, the responsibility of closing the open file is passed to the swift.obj.diskfile.DiskFileReader object. keep_cache caller's preference for keeping data read in the OS buffer cache cooperative_period the period parameter for cooperative yielding during file read quarantine_hook 1-arg callable called when obj quarantined; the arg is the reason for quarantine. Default is to ignore it. Not needed by the REST layer. a swift.obj.diskfile.DiskFileReader object Write a block of metadata to an object without requiring the caller to create the object first. Supports fast-POST behavior semantics. metadata dictionary of metadata to be associated with the object DiskFileError this implementation will raise the same errors as the create() method. 
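A read-path usage sketch tying together the open(), get_metadata() and reader() calls described above; it assumes an already-constructed DiskFileManager `mgr` and is illustrative rather than production code:

```
from swift.common.exceptions import DiskFileNotExist

def read_object(mgr, device, partition, account, container, obj, policy):
    df = mgr.get_diskfile(device, partition, account, container, obj,
                          policy=policy)
    try:
        with df.open():              # may raise DiskFileNotExist, etc.
            metadata = df.get_metadata()
            body = b''.join(df.reader())
    except DiskFileNotExist:
        return None, None
    return metadata, body
```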
Bases: object Management class for devices, providing common place for shared parameters and methods not provided by the DiskFile class (which primarily services the object server REST API layer). The get_diskfile() method is how this implementation creates a DiskFile object. Note This class is reference implementation specific and not part of the pluggable on-disk backend API. Note TODO(portante): Not sure what the right name to recommend here, as manager seemed generic enough, though suggestions are welcome. conf caller provided configuration object logger caller provided logger Clean up on-disk files that are obsolete and gather the set of valid on-disk files for an object. hsh_path object hash path frag_index if set, search for a specific fragment index .data file, otherwise accept the first valid .data file. a dict that may contain: valid on disk files keyed by their filename extension; a list of obsolete files stored under the key obsolete; a list of files remaining in the directory, reverse sorted, stored under the key files. Take what's in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Construct the path to a device without checking if it is mounted. device name of target device full path to the device Return the path to a device, first checking to see if either it is a proper mount point, or at least a directory depending on the mount_check configuration option. device name of target device mount_check whether or not to check mountedness of device. Defaults to bool(self.mount_check). full path to the device, None if the path to the device is not a proper mount point or directory. Returns a BaseDiskFile instance for an object based on the object's partition, path parts and policy. device name of target device partition partition on device in which the object lives account account name for the object container container name for the object obj object name for the object policy the StoragePolicy instance Returns a tuple of (a DiskFile instance for an object at the given object_hash, the basenames of the files in the object's hash dir). Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead. 
device name of target device partition partition on the device in which the object lives object_hash the hash of an object path policy the StoragePolicy instance DiskFileNotExist if the object does not exist an instance of BaseDiskFile device name of target device partition partition name suffixes a list of suffix directories to be recalculated policy the StoragePolicy instance skip_rehash just mark the suffixes dirty; return None a dictionary that maps suffix directories Given a simple list of files names, determine the files that constitute a valid fileset i.e. a set of files that defines the state of an object, and determine the files that are obsolete and could be deleted. Note that some files may fall into neither category. If a file is considered part of a valid fileset then its info dict will be added to the results dict, keyed by <extension>_info. Any files that are no longer required will have their info dicts added to a list stored under the key obsolete. The results dict will always contain entries with keys ts_file, datafile and metafile. Their values will be the fully qualified path to a file of the corresponding type if there is such a file in the valid fileset, or" }, { "data": "files a list of file names. datadir directory name files are from; this is used to construct file paths in the results, but the datadir is not modified by this method. verify if True verify that the ondisk file contract has not been violated, otherwise do not verify. policy storage policy used to store the files. Used to validate fragment indexes for EC policies. ts_file -> path to a .ts file or None data_file -> path to a .data file or None meta_file -> path to a .meta file or None ctype_file -> path to a .meta file or None ts_info -> a file info dict for a .ts file data_info -> a file info dict for a .data file meta_info -> a file info dict for a .meta file ctype_info -> a file info dict for a .meta file which contains the content-type value unexpected -> a list of file paths for unexpected files possible_reclaim -> a list of file info dicts for possible reclaimable files obsolete -> a list of file info dicts for obsolete files Invalidates the hash for a suffix_dir in the partitions hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Returns filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name ctype_timestamp an optional content-type timestamp, an instance of Timestamp a file name Yield an AuditLocation for all objects stored under device_dirs. policy the StoragePolicy instance device_dirs directory of target device auditor_type either ALL or ZBF Parse an on disk file name. filename the file name including extension policy storage policy used to store the file a dict, with keys for timestamp, ext and ctype_timestamp: timestamp is a Timestamp ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension. Subclasses may override this method to add further keys to the returned dict. DiskFileError if any part of the filename is not able to be validated. A context manager that will lock on the partition given. 
device device targeted by the lock request policy policy targeted by the lock request partition partition targeted by the lock request PartitionLockTimeout If the lock on the partition cannot be granted within the configured timeout. Write data describing a container update notification to a pickle file in the async_pending directory. device name of target device account account name for the object container container name for the object obj object name for the object data update data to be written to pickle file timestamp a Timestamp policy the StoragePolicy instance In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename A context manager that will lock on the partition and, if configured to do so, on the device given. device name of target device policy policy targeted by the replication request partition partition targeted by the replication request ReplicationLockTimeout If the lock on the device cannot be granted within the configured timeout. Yields tuples of (hash_only, timestamps) for object information stored for the given device, partition, and (optionally)" }, { "data": "If suffixes is None, all stored suffixes will be searched for object hashes. Note that if suffixes is not None but empty, such as [], then nothing will be yielded. timestamps is a dict which may contain items mapping: ts_data -> timestamp of data or tombstone file, ts_meta -> timestamp of meta file, if one exists content-type value, if one exists durable -> True if data file at ts_data is durable, False otherwise where timestamps are instances of Timestamp device name of target device partition partition name policy the StoragePolicy instance suffixes optional list of suffix directories to be searched Yields tuples of (fullpath, suffixonly) for suffixes stored on the given device and partition. device name of target device partition partition name policy the StoragePolicy instance Bases: object Encapsulation of the WSGI read context for servicing GET REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile classs swift.obj.diskfile.DiskFile.reader() method. Note The quarantining behavior of this method is considered implementation specific, and is not required of the API. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. 
fp open file object pointer reference data_file on-disk data file name for the object obj_size verified on-disk size of the object etag expected metadata etag value for entire file diskchunksize size of reads from disk in bytes keepcachesize maximum object size that will be kept in cache device_path on-disk device path, used when quarantining an obj logger logger caller wants this object to use quarantine_hook 1-arg callable called w/reason when quarantined use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations diskfile the diskfile creating this DiskFileReader instance keep_cache should resulting reads be kept in the buffer cache cooperative_period the period parameter when does cooperative yielding during file read Returns an iterator over the data file for range (start, stop) Returns an iterator over the data file for a set of ranges Close the open file handle if present. For this specific implementation, this method will handle quarantining the file if necessary. Does some magic with splice() and tee() to move stuff from disk to network without ever touching userspace. wsockfd file descriptor (integer) of the socket out which to send data Bases: object Encapsulation of the write context for servicing PUT REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile classs swift.obj.diskfile.DiskFile.create() method. Note It is the responsibility of the swift.obj.diskfile.DiskFile.create() method context manager to close the open file descriptor. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. name name of object from REST API datadir on-disk directory object will end up in on swift.obj.diskfile.DiskFileWriter.put() fd open file descriptor of temporary file to receive data tmppath full path name of the opened file descriptor bytespersync number bytes written between sync calls diskfile the diskfile creating this DiskFileWriter instance nextpartpower the next partition power to be used extension the file extension to be used; may be used internally to distinguish between PUT/POST/DELETE operations Expose internal stats about written chunks. a tuple, (upload_size, etag) Perform any operations necessary to mark the object as durable. For replication policy type this is a no-op. timestamp object put timestamp, an instance of Timestamp Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Write a chunk of data to" }, { "data": "All invocations of this method must come before invoking the :func: For this implementation, the data is written into a temporary file. chunk the chunk of data to write as a string object Bases: BaseDiskFile alias of DiskFileReader alias of DiskFileWriter Bases: BaseDiskFileManager alias of DiskFile Bases: BaseDiskFileReader Bases: object Bases: BaseDiskFileWriter Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Bases: BaseDiskFile Provides the timestamp of the newest durable file found in the object directory. A Timestamp instance, or None if no durable file was found. DiskFileNotOpen if the open() method has not been previously called on this instance. Provides information about all fragments that were found in the object directory, including fragments without a matching durable file, and including any fragment chosen to construct the opened diskfile. 
A dict mapping <Timestamp instance> -> <list of frag indexes>, or None if the diskfile has not been opened or no fragments were found. Remove a tombstone file matching the specified timestamp or datafile matching the specified timestamp and fragment index from the object directory. This provides the EC reconstructor/ssync process with a way to remove a tombstone or fragment from a handoff node after reverting it to its primary node. The hash will be invalidated, and if empty the hsh_path will be removed immediately. timestamp the object timestamp, an instance of Timestamp frag_index fragment archive index, must be a whole number or None. nondurablepurgedelay only remove a non-durable data file if its been on disk longer than this many seconds. meta_timestamp if not None then remove any meta file with this timestamp alias of ECDiskFileReader alias of ECDiskFileWriter Bases: BaseDiskFileManager alias of ECDiskFile Returns the EC specific filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name frag_index a fragment archive index, used with .data extension only, must be a whole number. ctype_timestamp an optional content-type timestamp, an instance of Timestamp durable if True then include a durable marker in data filename. a file name DiskFileError if ext==.data and the kwarg frag_index is not a whole number Returns timestamp(s) and other info extracted from a policy specific file name. For EC policy the data file name includes a fragment index and possibly a durable marker, both of which must be stripped off to retrieve the timestamp. filename the file name including extension ctype_timestamp: timestamp is a Timestamp frag_index is an int or None ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension durable is a boolean that is True if the filename is a data file that includes a durable marker DiskFileError if any part of the filename is not able to be validated. Return int representation of frag_index, or raise a DiskFileError if frag_index is not a whole number. frag_index a fragment archive index policy storage policy used to validate the index against Bases: BaseDiskFileReader Bases: BaseDiskFileWriter Finalize put by renaming the object data file to include a durable marker. We do this for EC policy because it requires a 2-phase put commit confirmation. timestamp object put timestamp, an instance of Timestamp DiskFileError if the diskfile frag_index has not been set (either during initialisation or a call to put()) The only difference between this method and the replication policy DiskFileWriter method is adding the frag index to the" }, { "data": "metadata dictionary of metadata to be associated with object Take whats in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Extracts the policy for an object (based on the name of the objects directory) given the device-relative path to the object. Returns None in the event that the path is malformed in some way. 
The device-relative path is everything after the mount point; for example: 485dc017205a81df3af616d917c90179/1401811134.873649.data would have device-relative path: objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data obj_path device-relative path of an object, or the full path a BaseStoragePolicy or None Get the async dir for the given policy. policyorindex StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. asyncpending or asyncpending-<N> as appropriate Get the data dir for the given policy. policyorindex StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. objects or objects-<N> as appropriate Given the device path, policy, and partition, returns the full path to the partition Get the temp dir for the given policy. policyorindex StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. tmp or tmp-<N> as appropriate Invalidates the hash for a suffix_dir in the partitions hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Given a devices path (e.g. /srv/node), yield an AuditLocation for all objects stored under that directory for the given datadir (policy), if devicedirs isnt set. If devicedirs is set, only yield AuditLocation for the objects under the entries in device_dirs. The AuditLocation only knows the path to the hash directory, not to the .data file therein (if any). This is to avoid a double listdir(hash_dir); the DiskFile object will always do one, so we dont. devices parent directory of the devices to be audited datadir objects directory mount_check flag to check if a mount check should be performed on devices logger a logger object device_dirs a list of directories under devices to traverse auditor_type either ALL or ZBF In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename Read the existing hashes.pkl a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Helper function to read the pickled metadata from an object file. fd file descriptor or filename to load the metadata from addmissingchecksum if set and checksum is missing, add it dictionary of metadata Hard-links a file located in target_path using the second path newtargetpath. Creates intermediate directories if required. target_path current absolute filename newtargetpath new absolute filename for the hardlink ignore_missing if True then no exception is raised if the link could not be made because target_path did not exist, otherwise an OSError will be raised. OSError if the hard link could not be created, unless the intended hard link already exists or the target_path does not exist and must_exist if False. True if the link was created by the call to this method, False otherwise. Write hashes to hashes.pkl The updated key is added to hashes before it is" }, { "data": "Helper function to write pickled metadata for an object file. fd file descriptor or filename to write the metadata metadata metadata to write Bases: Daemon Replicate objects. Encapsulates most logic and data needed by the object replication process. Each call to .replicate() performs one replication pass. Its up to the caller to do this in a loop. 
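Since each call to .replicate() performs a single pass, the caller supplies the loop. A sketch of that caller-side loop (replicator here is assumed to be any object exposing a replicate() method as documented above):

```python
import time

def run_forever(replicator, interval=30):
    """Call replicate() in a loop, sleeping out the remainder of each
    interval so passes start at a roughly steady cadence."""
    while True:
        start = time.time()
        replicator.replicate()          # one full replication pass
        elapsed = time.time() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)
```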
Helper function for collect_jobs to build jobs for replication using replication style storage policy Check to see if the ring has been updated :param object_ring: the ring to check boolean indicating whether or not the ring has changed Returns a sorted list of jobs (dictionaries) that specify the partitions, nodes, etc to be rsynced. override_devices if set, only jobs on these devices will be returned override_partitions if set, only jobs on these partitions will be returned override_policies if set, only jobs in these storage policies will be returned Returns a set of all local devices in all replication-type storage policies. This is the device names, e.g. sdq or d1234 or something, not the full ring entries. For each worker yield a (possibly empty) dict of kwargs to pass along to the daemons run() method after fork. The length of elements returned from this method will determine the number of processes created. If the returned iterable is empty, the Strategy will fallback to run-inline strategy. once False if the worker(s) will be daemonized, True if the worker(s) will be run once kwargs plumbed through via command line argparser an iterable of dicts, each element represents the kwargs to be passed to a single workers run() method after fork. Loop that runs in the background during replication. It periodically logs progress. Check whether our set of local devices remains the same. If devices have been added or removed, then we return False here so that we can kill off any worker processes and then distribute the new set of local devices across a new set of workers so that all devices are, once again, being worked on. This function may also cause recon stats to be updated. False if any local devices have been added or removed, True otherwise Make sure the policys rings are loaded. policy the StoragePolicy instance appropriate ring object Override this to do something after running using multiple worker processes. This method is called in the parent process. This is probably only useful for run-once mode since there is no after running in run-forever mode. Run a replication pass High-level method that replicates a single partition that doesnt belong on this node. job a dict containing info about the partition to be replicated Uses rsync to implement the sync method. This was the first sync method in Swift. Override this to run forever Override this to run the script once Logs various stats for the currently running replication pass. Synchronize local suffix directories from a partition with a remote node. node the dev entry for the remote node to sync with job information about the partition being synced suffixes a list of suffixes which need to be pushed boolean and dictionary, boolean indicating success or failure High-level method that replicates a single partition. job a dict containing info about the partition to be replicated Bases: object Note the failure of one or more devices. failures a list of (ip, device-name) pairs that failed Bases: object Sends SSYNC requests to the object server. These requests are eventually handled by ssync_receiver and full documentation about the process is" }, { "data": "Establishes a connection and starts an SSYNC request with the object server. Closes down the connection to the object server once done with the SSYNC request. Handles the sender-side of the MISSING_CHECK step of a SSYNC request. Full documentation of this can be found at Receiver.missing_check(). Sends a DELETE subrequest with the given information. 
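For illustration, a simplified sketch of the hash/timestamp line format the sender emits during the MISSING_CHECK step (described above, and detailed further below). It uses plain float timestamps rather than Swift's Timestamp type, so the hex delta encoding here is only an approximation of the real wire format:

```python
def encode_missing(object_hash, ts_data, ts_meta=None, ts_ctype=None,
                   durable=True):
    """Build '<hash> <ts_data> [m:<delta>[,t:<delta>][,durable:False]]'."""
    msg = '%s %.5f' % (object_hash, ts_data)
    extra = []
    if ts_meta is not None and ts_meta != ts_data:
        # deltas are encoded in hex, forwards from ts_data
        extra.append('m:%x' % int((ts_meta - ts_data) * 100000))
        if ts_ctype is not None and ts_ctype != ts_data:
            extra.append('t:%x' % int((ts_ctype - ts_data) * 100000))
    if not durable:
        extra.append('durable:False')
    return msg + (' ' + ','.join(extra) if extra else '')

# e.g. data written at T, meta 2.5s later, fragment not yet durable:
print(encode_missing('9a7175077c01a23ade5956b8a2bba900', 1700000000.0,
                     ts_meta=1700000002.5, durable=False))
```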
Sends a PUT subrequest for the url_path using the source df (DiskFile) and content_length. Handles the sender-side of the UPDATES step of an SSYNC request. Full documentation of this can be found at Receiver.updates(). Bases: BufferedHTTPConnection alias of SsyncBufferedHTTPResponse Bases: BufferedHTTPResponse, object Reads a line from the SSYNC response body. httplib has no readline and will block on read(x) until x is read, so we have to do the work ourselves. A bit of this is taken from Pythons httplib itself. Parse missing_check line parts to determine which parts of local diskfile were wanted by the receiver. The encoder for parts is encode_wanted() Returns a string representing the object hash, its data file timestamp, the delta forwards to its metafile and content-type timestamps, if non-zero, and its durability, in the form: <hash> <tsdata> [m:<hex delta to tsmeta>[,t:<hex delta to ts_ctype>] [,durable:False] The decoder for this line is decode_missing() Bases: object Handles incoming SSYNC requests to the object server. These requests come from the object-replicator daemon that uses ssync_sender. The number of concurrent SSYNC requests is restricted by use of a replication_semaphore and can be configured with the object-server.conf [object-server] replication_concurrency setting. An SSYNC request is really just an HTTP conduit for sender/receiver replication communication. The overall SSYNC request should always succeed, but it will contain multiple requests within its request and response bodies. This hack is done so that replication concurrency can be managed. The general process inside an SSYNC request is: Initialize the request: Basic request validation, mount check, acquire semaphore lock, etc.. Missing check: Sender sends the hashes and timestamps of the object information it can send, receiver sends back the hashes it wants (doesnt have or has an older timestamp). Updates: Sender sends the object information requested. Close down: Release semaphore lock, etc. Basic validation of request and mount check. This function will be called before attempting to acquire a replication semaphore lock, so contains only quick checks. Handles the receiver-side of the MISSING_CHECK step of a SSYNC request. Receives a list of hashes and timestamps of object information the sender can provide and responds with a list of hashes desired, either because theyre missing or have an older timestamp locally. The process is generally: Sender sends :MISSING_CHECK: START and begins sending hash timestamp lines. Receiver gets :MISSING_CHECK: START and begins reading the hash timestamp lines, collecting the hashes of those it desires. Sender sends :MISSING_CHECK: END. Receiver gets :MISSING_CHECK: END, responds with :MISSING_CHECK: START, followed by the list of <wanted_hash> specifiers it collected as being wanted (one per line), :MISSING_CHECK: END, and flushes any buffers. Each <wanted_hash> specifier has the form <hash>[ <parts>] where <parts> is a string containing characters d and/or m indicating that only data or meta part of object respectively is required to be syncd. Sender gets :MISSING_CHECK: START and reads the list of hashes desired by the receiver until reading :MISSING_CHECK: END. The collection and then response is so the sender doesnt have to read while it writes to ensure network buffers dont fill up and block everything. Handles the UPDATES step of an SSYNC request. 
Receives a set of PUT and DELETE subrequests that will be routed to the object server itself for" }, { "data": "These contain the information requested by the MISSING_CHECK step. The PUT and DELETE subrequests are formatted pretty much exactly like regular HTTP requests, excepting the HTTP version on the first request line. The process is generally: Sender sends :UPDATES: START and begins sending the PUT and DELETE subrequests. Receiver gets :UPDATES: START and begins routing the subrequests to the object server. Sender sends :UPDATES: END. Receiver gets :UPDATES: END and sends :UPDATES: START and :UPDATES: END (assuming no errors). Sender gets :UPDATES: START and :UPDATES: END. If too many subrequests fail, as configured by replicationfailurethreshold and replicationfailureratio, the receiver will hang up the request early so as to not waste any more time. At step 4, the receiver will send back an error if there were any failures (that didnt cause a hangup due to the above thresholds) so the sender knows the whole was not entirely a success. This is so the sender knows if it can remove an out of place partition, for example. Bases: Exception Parse a string of the form generated by encode_missing() and return a dict with keys objecthash, tsdata, tsmeta, tsctype, durable. The encoder for this line is encode_missing() Compare a remote and local results and generate a wanted line. remote a dict, with tsdata and tsmeta keys in the form returned by decode_missing() local a dict, possibly empty, with tsdata and tsmeta keys in the form returned Receiver.checklocal() The decoder for this line is decode_wanted() Bases: Daemon Reconstruct objects using erasure code. And also rebalance EC Fragment Archive objects off handoff nodes. Encapsulates most logic and data needed by the object reconstruction process. Each call to .reconstruct() performs one pass. Its up to the caller to do this in a loop. Aggregate per-disk rcache updates from child workers. Helper function for collect_jobs to build jobs for reconstruction using EC style storage policy N.B. If this function ever returns an empty list of jobs the entire partition will be deleted. Check to see if the ring has been updated object_ring the ring to check boolean indicating whether or not the ring has changed Helper for getting partitions in the top level reconstructor In handoffs_only mode primary partitions will not be included in the returned (possibly empty) list. For EC we can potentially revert only some of a partition so well delete reverted objects here. Note that we delete the fragment index of the file we sent to the remote node. job the job being processed objects a dict of objects to be deleted, each entry maps hash=>timestamp In testing, the pool.waitall() call very occasionally failed to return. This is an attempt to make sure the reconstructor finishes its reconstruction pass in some eventuality. Add stats for this workers run to recon cache. When in worker mode (perdiskstats == True) this workers stats are added per device instead of in the top level keys (aggregation is serialized in the parent process). total the runtime of cycle in minutes override_devices (optional) list of device that are being reconstructed Returns a set of all local devices in all EC policies. Compare the local suffix hashes with the remote suffix hashes for the given local and remote fragment indexes. 
Return those suffixes which should be" }, { "data": "localsuff the local suffix hashes (from get_hashes) local_index the local fragment index for the job remote_suff the remote suffix hashes (from remote REPLICATE request) remote_index the remote fragment index for the job a list of strings, the suffix dirs to sync Take the set of all local devices for this node from all the EC policies rings, and distribute them evenly into the number of workers to be spawned according to the configured worker count. If devices is given in kwargs then distribute only those devices. once False if the worker(s) will be daemonized, True if the worker(s) will be run once kwargs optional overrides from the command line Loop that runs in the background during reconstruction. It periodically logs progress. Check whether rings have changed, and maybe do a recon update. False if any ec ring has changed Utility function that kills all coroutines currently running. Make sure the policys rings are loaded. policy the StoragePolicy instance appropriate ring object Turn a set of connections from backend object servers into a generator that yields up the rebuilt fragment archive for frag_index. Override this to do something after running using multiple worker processes. This method is called in the parent process. This is probably only useful for run-once mode since there is no after running in run-forever mode. Sync the local partition with the remote node(s) according to the parameters of the job. For primary nodes, the SYNC job type will define both left and right hand sync_to nodes to ssync with as defined by this primary nodes index in the node list based on the fragment index found in the partition. For non-primary nodes (either handoff revert, or rebalance) the REVERT job will define a single node in sync_to which is the proper/new home for the fragment index. N.B. ring rebalancing can be time consuming and handoff nodes fragment indexes do not have a stable order, its possible to have more than one REVERT job for a partition, and in some rare failure conditions there may even also be a SYNC job for the same partition - but each one will be processed separately because each job will define a separate list of node(s) to sync_to. job the job dict, with the keys defined in getjob_info Run a reconstruction pass Reconstructs a fragment archive - this method is called from ssync after a remote node responds that is missing this object - the local diskfile is opened to provide metadata - but to reconstruct the missing fragment archive we must connect to multiple object servers. job job from ssync_sender. node node to which were rebuilding. df an instance of BaseDiskFile. a DiskFile like class for use by ssync. DiskFileQuarantined if the fragment archive cannot be reconstructed and has as a result been quarantined. DiskFileError if the fragment archive cannot be reconstructed. Override this to run forever Override this to run the script once Logs various stats for the currently running reconstruction pass. Bases: object This class wraps the reconstructed fragment archive data and metadata in the DiskFile interface for ssync. Bases: object Encapsulates fragment GET response data related to a single timestamp. Object Server for Swift Bases: bytes Eventlet wont send headers until its accumulated at least eventlet.wsgi.MINIMUMCHUNKSIZE bytes or the app iter is exhausted. 
If we want to send the response body behind Eventlets back, perhaps with some zero-copy wizardry, then we have to unclog the plumbing in eventlet.wsgi to force the headers out, so we use an EventletPlungerString to empty out all of Eventlets buffers. Bases: BaseStorageServer Implements the WSGI application for the Swift Object Server. Handle HTTP DELETE requests for the Swift Object Server. Handle HTTP GET requests for the Swift Object Server. Handle HTTP HEAD requests for the Swift Object" }, { "data": "Handle HTTP POST requests for the Swift Object Server. Handle HTTP PUT requests for the Swift Object Server. Handle REPLICATE requests for the Swift Object Server. This is used by the object replicator to get hashes for directories. Note that the name REPLICATE is preserved for historical reasons as this verb really just returns the hashes information for the specified parameters and is used, for example, by both replication and EC. Sends or saves an async update. op operation performed (ex: PUT, or DELETE) account account name for the object container container name for the object obj object name host host that the container is on partition partition that the container is on contdevice device name that the container is on headers_out dictionary of headers to send in the container request objdevice device name that the object is in policy the associated BaseStoragePolicy instance loggerthreadlocals The thread local values to be set on the self.logger to retain transaction logging information. container_path optional path in the form <account/container> to which the update should be sent. If given this path will be used instead of constructing a path from the account and container params. Update the container when objects are updated. op operation performed (ex: PUT, or DELETE) account account name for the object container container name for the object obj object name request the original request object driving the update headers_out dictionary of headers to send in the container request(s) objdevice device name that the object is in policy the BaseStoragePolicy instance Update the expiring objects container when objects are updated. op operation performed (ex: PUT, or DELETE) delete_at scheduled delete in UNIX seconds, int account account name for the object container container name for the object obj object name request the original request driving the update objdevice device name that the object is in policy the BaseStoragePolicy instance (used for tmp dir) Utility method for instantiating a DiskFile object supporting a given REST API. An implementation of the object server that wants to use a different DiskFile class would simply over-ride this method to provide that behavior. Implementation specific setup. This method is called at the very end by the constructor to allow a specific implementation to modify existing attributes or add its own attributes. conf WSGI configuration parameter paste.deploy app factory for creating WSGI object server apps Read and discard any bytes from file_like. file_like file-like object to read from read_size how big a chunk to read at a time timeout how long to wait for a read (use None for no timeout) ChunkReadTimeout if no chunk was read in time Split and validate path for an object. request a swob request a tuple of path parts and storage policy Callback for swift.common.wsgi.runwsgi during the globalconf creation so that we can add our replication_semaphore, used to limit the number of concurrent SSYNC_REQUESTS across all workers. 
preloadedappconf The preloaded conf for the WSGI app. This conf instance will go away, so just read from it, dont write. global_conf The global conf that will eventually be passed to the app_factory function later. This conf is created before the worker subprocesses are forked, so can be useful to set up semaphores, shared memory, etc. Bases: object Wrap an iterator to rate-limit updates on a per-bucket basis, where updates are mapped to buckets by hashing their destination path. If an update is rate-limited then it is placed on a deferral queue and may be sent later if the wrapped iterator is exhausted before the drain_until time is" }, { "data": "The deferral queue has constrained size and once the queue is full updates are evicted using a first-in-first-out policy. This policy is used because updates on the queue may have been made obsolete by newer updates written to disk, and this is more likely for updates that have been on the queue longest. The iterator increments stats as follows: The deferrals stat is incremented for each update that is rate-limited. Note that a individual update is rate-limited at most once. The skips stat is incremented for each rate-limited update that is not eventually yielded. This includes updates that are evicted from the deferral queue and all updates that remain in the deferral queue when drain_until time is reached and the iterator terminates. The drains stat is incremented for each rate-limited update that is eventually yielded. Consequently, when this iterator terminates, the sum of skips and drains is equal to the number of deferrals. updateiterable an asyncpending update iterable logger a logger instance stats a SweepStats instance num_buckets number of buckets to divide container hashes into, the more buckets total the less containers to a bucket (once a busy container slows down a bucket the whole bucket starts deferring) maxelementspergroupper_second tunable, when deferring kicks in maxdeferredelements maximum number of deferred elements before skipping starts. Each bucket may defer updates, but once the total number of deferred updates summed across all buckets reaches this value then all buckets will skip subsequent updates. drain_until time at which any remaining deferred elements must be skipped and the iterator stops. Once the wrapped iterator has been exhausted, this iterator will drain deferred elements from its buckets until either all buckets have drained or this time is reached. Bases: Daemon Update object information in container listings. Get the container ring. Load it, if it hasnt been yet. If there are async pendings on the device, walk each one and update. device path to device Perform the object update to the container node node dictionary from the container ring part partition that holds the container op operation performed (ex: PUT or DELETE) obj object name being updated headers_out headers to send with the update a tuple of (success, node_id, redirect) where success is True if the update succeeded, node_id is the_id of the node updated and redirect is either None or a tuple of (a path, a timestamp string). Process the object information to be updated and update. update_path path to pickled object update file device path to device policy storage policy of object update update the un-pickled update data kwargs un-used keys from update_ctx Run the updater continuously. Run the updater once. 
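A rough sketch of the per-bucket rate limiting with a bounded FIFO deferral queue described above; all names and thresholds here are hypothetical, not Swift's implementation:

```python
import time
from collections import deque
from hashlib import md5

class BucketizedLimiter:
    """Map updates to buckets by hashing their destination path; defer
    rate-limited updates to a bounded FIFO queue (oldest evicted first,
    since older deferrals are more likely to be obsolete)."""

    def __init__(self, num_buckets=1000, max_deferred=10000):
        self.num_buckets = num_buckets
        self.max_deferred = max_deferred
        self.next_allowed = [0.0] * num_buckets  # per-bucket earliest send time
        self.deferrals = deque()

    def bucket_for(self, path):
        return int(md5(path.encode('utf8')).hexdigest(), 16) % self.num_buckets

    def submit(self, update, path, min_interval=0.1):
        now = time.time()
        bucket = self.bucket_for(path)
        if now >= self.next_allowed[bucket]:
            self.next_allowed[bucket] = now + min_interval
            return update                       # send immediately
        if len(self.deferrals) >= self.max_deferred:
            self.deferrals.popleft()            # evict the oldest deferral
        self.deferrals.append(update)           # drain later, input permitting
        return None
```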
Bases: EventletRateLimiter Extends EventletRateLimiter to also maintain a deque of items that have been deferred due to rate-limiting, and to provide a comparator for sorting instances by readiness. Bases: object Stats bucket for an update sweep A measure of the rate at which updates are being rate-limited is:
```
deferrals / (deferrals + successes + failures - drains)
```
A measure of the rate at which updates are not being sent during a sweep is:
```
skips / (skips + successes + failures)
```
Split the account and container parts out of the async update data. N.B. updates to shards set the container_path key while the account and container keys are always the root.
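The two ratios above translate directly into code; for example (a trivial sketch using the formulas verbatim):

```python
def ratelimit_rate(deferrals, successes, failures, drains):
    """Fraction of updates that were rate-limited during the sweep."""
    return deferrals / (deferrals + successes + failures - drains)

def skip_rate(skips, successes, failures):
    """Fraction of updates that were not sent during the sweep."""
    return skips / (skips + successes + failures)

# e.g. 40 deferrals, of which 30 were later drained and 10 skipped,
# alongside 100 successful updates:
assert ratelimit_rate(40, 100, 0, 30) == 40 / 110
assert skip_rate(10, 100, 0) == 10 / 110
```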
{ "category": "Runtime", "file_name": "object.html#module-swift.obj.auditor.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Database code for Swift Timeout for trying to connect to a DB Whether calls will be made to preallocate disk space for database files. Bases: DatabaseError More friendly error messages for DB Errors. Bases: object Encapsulates working with a database. Mark the DB as deleted timestamp internalized delete timestamp Check if the broker abstraction contains any undeleted records. Use with the with statement; returns a database connection. Get a list of objects in the database between start and end. start start ROWID count number to get list of objects between start and end Get information about the DB required for replication. dict containing keys from getinfo plus maxrow and metadata count and metadata is the raw string. Gets the most recent sync point for a server from the sync table. id remote ID to get the sync_point for incoming if True, get the last incoming sync, otherwise get the last outgoing sync the sync point, or -1 if the id doesnt exist. Get a serialized copy of the sync table. incoming if True, get the last incoming sync, otherwise get the last outgoing sync includetimestamp If True include the updatedat timestamp list of {remoteid, syncpoint} or {remoteid, syncpoint, updated_at} if include_timestamp is True. Create the DB The storagepolicyindex is passed through to the subclasss _initialize method. It is ignored by AccountBroker. put_timestamp internalized timestamp of initial PUT request storagepolicyindex only required for containers Check if the DB is considered to be deleted. True if the DB is considered to be deleted, False otherwise Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Use with the with statement; locks a database. Turn this db record dict into the format this service uses for pending pickles. Save :param:item_list to the database. Merge a list of sync points with the incoming sync table. sync_points list of sync points where a sync point is a dict of {syncpoint, remoteid} incoming if True, get the last incoming sync, otherwise get the last outgoing sync Used in replication to handle updating timestamps. created_at create timestamp put_timestamp put timestamp delete_timestamp delete timestamp Returns the metadata dict for the database. The metadata dict values are tuples of (value, timestamp) where the timestamp indicates when that key was set to that value. Re-id the database. This should be called after an rsync. remote_id the ID of the remote database being rsynced in Checks the exception info to see if it indicates a quarantine situation (malformed or corrupted database). If not, the original exception will be reraised. If so, the database will be quarantined and a new" }, { "data": "will be raised indicating the action taken. Put a record into the DB. If the DB has an associated pending file with space then the record is appended to that file and a commit to the DB is deferred. If its pending file is full then the record will be committed immediately. record a record to be added to the DB. DatabaseConnectionError if the DB file does not exist or if skip_commits is True. LockTimeout if a timeout occurs while waiting to take a lock to write to the pending file. The database will be quarantined and a sqlite3.DatabaseError will be raised indicating the action taken. Delete reclaimable rows and metadata from the db. 
Delete reclaimable rows and metadata from the db. By default this method will delete rows from the db_contains_type table that are marked deleted and whose created_at timestamp is < age_timestamp, and deletes rows from incoming_sync and outgoing_sync where the updated_at timestamp is < sync_timestamp. In addition, this calls the reclaim_metadata() method. Subclasses may reclaim other items by overriding _reclaim(). age_timestamp max created_at timestamp of object rows to delete sync_timestamp max updated_at timestamp of sync rows to delete Updates the metadata dict for the database. The metadata dict values are tuples of (value, timestamp) where the timestamp indicates when that key was set to that value. Key/values will only be overwritten if the timestamp is newer. To delete a key, set its value to ('', timestamp). These empty keys will eventually be removed by reclaim() Update the put_timestamp. Only modifies it if it is greater than the current timestamp. timestamp internalized put timestamp Update the status_changed_at field in the stat table. Only modifies status_changed_at if the timestamp is greater than the current status_changed_at timestamp. timestamp internalized timestamp Use with the with statement; updates timeout within the block. Validates that metadata falls within acceptable limits. metadata to be validated HTTPBadRequest if MAX_META_COUNT or MAX_META_OVERALL_SIZE is exceeded, or if metadata contains non-UTF-8 data Bases: DatabaseError More friendly error messages for DB Errors. Bases: Connection SQLite DB Connection handler that plays well with eventlet. Commit any pending transaction to the database. If there is no open transaction, this method is a no-op. Return a cursor for the connection. Executes an SQL statement. Bases: Cursor SQLite Cursor handler that plays well with eventlet. Executes an SQL statement. Pickle protocol to use Whether calls will be made to log queries (py3 only) Bases: object Encapsulates reclamation of deleted rows in a database. Return the number of remaining tombstones newer than age_timestamp. Executes the reclaim method if it has not already been called on this instance. The number of tombstones in the broker that are newer than age_timestamp. Perform reclaim of deleted rows older than age_timestamp. Each entry in the account and container databases is XORed by the 128-bit hash on insert or delete. This serves as a rolling, order-independent hash of the contents. (check + XOR) old hex representation of the current DB hash name name of the object or container being inserted timestamp internalized timestamp of the new record a hex representation of the new hash value This should only be used when you need a real dict, i.e. when you're going to serialize the results. Returns a properly configured SQLite database connection. path path to DB timeout timeout for connection okay_to_create if True, create the DB if it doesn't exist DB connection object We've cargo culted our consumers to be tolerant of various expressions of zero in our databases for backwards compatibility with less disciplined producers.
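One way to implement the rolling, order-independent hash described above (the check-and-XOR scheme): XOR the current 128-bit hash with the hash of the entry's name and timestamp, so that applying the same entry twice cancels out. A sketch, not necessarily byte-for-byte what Swift does:

```python
from hashlib import md5

def chexor(old_hex, name, timestamp):
    """Fold one entry into the rolling 128-bit hash via XOR."""
    entry = md5(('%s-%s' % (name, timestamp)).encode('utf8')).hexdigest()
    return '%032x' % (int(old_hex, 16) ^ int(entry, 16))

# XOR is its own inverse, so insert followed by delete restores the hash:
h = chexor('0' * 32, 'o1', '1700000000.00000')
assert chexor(h, 'o1', '1700000000.00000') == '0' * 32
```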
Bases: BufferedHTTPConnection Helper to simplify REPLICATEing to a remote server. Make an HTTP REPLICATE request args list of json-encodable objects bufferedhttp response object Bases: Daemon Implements the logic for directing db replication. Cleanup non primary database from disk if needed. broker the broker for the database we're replicating orig_info snapshot of the broker replication info dict taken before replication responses a list of boolean success values for each replication request to other nodes returns False if deletion of the database was attempted but unsuccessful, otherwise returns True. Extract the device name from an object path. Returns UNKNOWN if the path could not be extracted successfully for some reason. object_file the path to a database file. Replicate dbs under the given root in an infinite loop. Run a replication pass once. Bases: object Handle Replication RPC calls. TODO(redbo): document please :) True if the directory name is a valid partition number, False otherwise. In the case that a corrupt file is found, move it to a quarantined area to allow replication to fix it. object_file path to corrupt file server_type type of file that is corrupt (container or account) Generator to walk the data dirs in a round robin manner, evenly hitting each device on the system, and yielding any .db files found (in their proper places). The partitions within each data dir are walked randomly, however. datadirs a list of tuples of (path, context, partition_filter) to walk. The context may be any object; the context is not used by this function but is included with each yielded tuple. A generator of (partition, path_to_db_file, context)
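A sketch of the round-robin walk described above; it ignores the partition_filter element and walks partitions in listdir order rather than randomly, but it shows how one item is pulled from each data dir per round so no single device is hit in a burst:

```python
import os

def roundrobin_datadirs(datadirs):
    """Yield (partition, db_path, context) tuples, one per dir per round."""
    def walk_one(path, context):
        for part in os.listdir(path):             # one partition at a time
            part_dir = os.path.join(path, part)
            for root, _dirs, files in os.walk(part_dir):
                for fname in files:
                    if fname.endswith('.db'):
                        yield part, os.path.join(root, fname), context
    its = [walk_one(path, context) for path, context, _filt in datadirs]
    while its:
        for it in list(its):
            try:
                yield next(it)                    # round-robin across dirs
            except StopIteration:
                its.remove(it)
```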
{ "category": "Runtime", "file_name": "object.html#module-swift.obj.server.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Bases: DatabaseAuditor Audit containers. alias of ContainerBroker Pluggable Back-ends for Container Server Bases: DatabaseBroker Encapsulates working with a container database. Note that this may involve multiple on-disk DB files if the container becomes sharded: dbfile is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded. db_files is a list of existing db files for the broker. This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING state. db_file is the path to whichever db is currently authoritative for the container. Depending on the containers state, this may not be the same as the dbfile argument given to init_(), unless forcedbfile is True in which case db_file is always equal to the dbfile argument given to init_(). pendingfile is always equal to db_file extended with .pending, i.e. <hash>.db.pending. Create a ContainerBroker instance. If the db doesnt exist, initialize the db file. device_path device path part partition number account account name string container container name string logger a logger instance epoch a timestamp to include in the db filename put_timestamp initial timestamp if broker needs to be initialized storagepolicyindex the storage policy index a tuple of (broker, initialized) where broker is an instance of swift.container.backend.ContainerBroker and initialized is True if the db file was initialized, False otherwise. Create the container_info table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. Also creates the container_stat view. conn DB connection object put_timestamp put timestamp storagepolicyindex storage policy index Create the object table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Create policy_stat table. conn DB connection object storagepolicyindex the policy_index the container is being created with Create the shard_range table which is specific to the container DB. conn DB connection object Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if forcedbfile was True when the broker was constructed, then the primary db file is the file passed to the broker constructor. A path to a db file; the file does not necessarily exist. Gets the cached list of valid db files that exist on disk for this broker. reloaddbfiles(). A list of paths to db files ordered by ascending epoch; the list may be empty. Mark an object deleted. name object name to be deleted timestamp timestamp when the object was marked as deleted storagepolicyindex the storage policy index for the object Check if container DB is empty. This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty. True if the database has no active objects, False otherwise Updates this brokers own shard range with the given epoch, sets its state to SHARDING and persists it in the" }, { "data": "epoch a Timestamp the brokers updated own shard range. Scans the container db for shard ranges. 
Scanning will start at the upper bound of the any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace. This method does not modify the state of the db; callers are responsible for persisting any shard range data in the db. shard_size the size of each shard range limit the maximum number of shard points to be found; a negative value (default) implies no limit. existing_ranges an optional list of existing ShardRanges; if given, this list should be sorted in order of upper bounds; the scan for new shard ranges will start at the upper bound of the last existing ShardRange. minimumshardsize Minimum size of the final shard range. If this is greater than one then the final shard range may be extended to more than shard_size in order to avoid a further shard range with less minimumshardsize rows. a tuple; the first value in the tuple is a list of dicts each having keys {index, lower, upper, object_count} in order of ascending upper; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise. Returns a list of all shard range data, including own shard range and deleted shard ranges. A list of dict representations of a ShardRange. Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry. a list of ContainerBroker Returns the current state of on disk db files. Get global data for the container. dict with keys: account, container, created_at, puttimestamp, deletetimestamp, status, statuschangedat, objectcount, bytesused, reportedputtimestamp, reporteddeletetimestamp, reportedobjectcount, reportedbytesused, hash, id, xcontainersync_point1, xcontainersyncpoint2, and storagepolicy_index, db_state. Get the is_deleted status and info for the container. a tuple, in the form (info, is_deleted) info is a dict as returned by getinfo and isdeleted is a boolean. Get a list of objects which are in a storage policy different from the containers storage policy. start last reconciler sync point count maximum number of entries to get list of dicts with keys: name, created_at, size, contenttype, etag, storagepolicy_index Returns a list of persisted namespaces per input parameters. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored. reverse reverse the result order. states if specified, restricts the returned list to namespaces that have one of the given states; should be a list of" }, { "data": "fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. 
Gaps enclosed within the found shard ranges are not filled. a list of Namespace objects. Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {name, createdat, size, contenttype, etag, deleted, storagepolicyindex}. limit maximum number of entries to get marker if set, objects with names less than or equal to this value will not be included in the list. end_marker if set, objects with names greater than or equal to this value will not be included in the list. include_deleted if True, include only deleted objects; if False, include only undeleted objects; otherwise (default), include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. a list of dicts, each describing an object. Returns a shard range representing this brokers own shard range. If no such range has been persisted in the brokers shard ranges table then a default shard range representing the entire namespace will be returned. The objectcount and bytesused of the returned shard range are not guaranteed to be up-to-date with the current object stats for this broker. Callers that require up-to-date stats should use the get_info method. no_default if True and the brokers own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned. an instance of ShardRange Get information about the DB required for replication. dict containing keys from getinfo plus maxrow and metadata count and metadata is the raw string. Returns a list of persisted shard ranges. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored, but other constraints are applied (e.g. exclude_others and include_deleted). reverse reverse the result order. include_deleted include items that have the delete marker set. states if specified, restricts the returned list to shard ranges that have one of the given states; should be a list of ints. include_own boolean that governs whether the row whose name matches the brokers path is included in the returned list. If True, that row is included unless it is excluded by other constraints (e.g. marker, end_marker, includes). If False, that row is not included. Default is False. exclude_others boolean that governs whether the rows whose names do not match the brokers path are included in the returned list. If True, those rows are not included, otherwise they are included. Default is False. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. fill_gaps is ignored if includes is" }, { "data": "a list of instances of swift.common.utils.ShardRange. Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING. 
a dict with keys {bytesused, objectcount} Returns sharding specific info from the brokers metadata. key if given the value stored under key in the sharding info will be returned. either a dict of sharding info or the value stored under key in that dict. Returns sharding specific info from the brokers metadata with timestamps. key if given the value stored under key in the sharding info will be returned. a dict of sharding info with their timestamps. This function tells if there is any shard range other than the brokers own shard range, that is not marked as deleted. A boolean value as described above. Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Returns True if this container is a root container, False otherwise. A root container is a container that is not a shard of another container. Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query path if defined, will set the prefix and delimiter based on the path storagepolicyindex storage policy index for query reverse reverse the result order. include_deleted if True, include only deleted objects; if False (default), include only undeleted objects; otherwise, include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. transform_func an optional function that if given will be called for each object to get a transformed version of the object to include in the listing; should have same signature as transformrecord(); defaults to transformrecord(). all_policies if True, include objects for all storage policies ignoring any value given for storagepolicyindex allow_reserved exclude names with reserved-byte by default list of tuples of (name, createdat, size, contenttype, etag, deleted) Turn this db record dict into the format this service uses for pending pickles. Merge items into the object table. itemlist list of dictionaries of {name, createdat, size, content_type, etag, deleted, storagepolicyindex, ctype_timestamp, meta_timestamp} source if defined, update incoming_sync with the source Merge shard ranges into the shard range table. shard_ranges a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARDRANGEKEYS. Creates an object in the DB with its metadata. name object name to be created timestamp timestamp of when the object was created size object size content_type object content-type etag object etag deleted if True, marks the object as deleted and sets the deleted_at timestamp to timestamp storagepolicyindex the storage policy index for the object ctypetimestamp timestamp of when contenttype was last updated meta_timestamp timestamp of when metadata was last updated Reloads the cached list of valid on disk db files for this broker. Removes object records in the given namespace range from the object table. Note that objects are removed regardless of their" }, { "data": "lower defines the lower bound of object names that will be removed; names greater than this value will be removed; names less than or equal to this value will not be removed. 
upper defines the upper bound of object names that will be removed; names less than or equal to this value will be removed; names greater than this value will not be removed. The empty string is interpreted as there being no upper bound. maxrow if specified only rows less than or equal to maxrow will be removed Update reported stats, available with containers get_info. puttimestamp puttimestamp to update deletetimestamp deletetimestamp to update objectcount objectcount to update bytesused bytesused to update Given a list of values each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list. The following alias values are supported: listing maps to all states that are considered valid when listing objects; updating maps to all states that are considered valid for redirecting an object update; auditing maps to all states that are considered valid for a shard container that is updating its own shard range table from a root (this currently maps to all states except FOUND). states a list of values each of which may be the name of a state, the number of a state, or an alias a set of integer state numbers, or None if no states are given ValueError if any value in the given list is neither a valid state nor a valid alias Unlinks the brokers retiring DB file. True if the retiring DB was successfully unlinked, False otherwise. Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The brokers own shard range must have an epoch timestamp for this method to succeed. True if the fresh DB was successfully created, False otherwise. Updates the brokers metadata stored under the given key prefixed with a sharding specific namespace. key metadata key in the sharding metadata namespace. value metadata value Update the containerstat policyindex and statuschangedat. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise. Compares sharddata with existing and updates sharddata with any items of existing that take precedence over the corresponding item in shard_data. shard_data a dict representation of shard range that may be modified by this method. existing a dict representation of shard range. True if shard data has any item(s) that are considered to take precedence over the corresponding item in existing Compares new and existing shard ranges, updating the new shard ranges with any more recent state from the existing, and returns shard ranges sorted into those that need adding because they contain new or updated state and those that need deleting because their state has been superseded. newshardranges a list of dicts, each of which represents a shard range. existingshardranges a dict mapping shard range names to dicts representing a shard range. a tuple (toadd, todelete); to_add is a list of dicts, each of which represents a shard range that is to be added to the existing shard ranges; to_delete is a set of shard range names that are to be" }, { "data": "Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer. 
The multiple timestamps are encoded into a single string for storing in the created_at column of the objects db table. new_item A dict of object update attributes existing A dict of existing object attributes True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False implying that the updated item is equal to the existing. Bases: Replicator alias of ContainerBroker Cleanup non primary database from disk if needed. broker the broker for the database were replicating orig_info snapshot of the broker replication info dict taken before replication responses a list of boolean success values for each replication request to other nodes returns False if deletion of the database was attempted but unsuccessful, otherwise returns True. Ensure that reconciler databases are only cleaned up at the end of the replication run. Look for object rows for objects updates in the wrong storage policy in broker with a ROWID greater than the rowid given as point. broker the container broker with misplaced objects point the last verified reconcilersyncpoint the last successful enqueued rowid Add queue entries for rows in item_list to the local reconciler container database. container the name of the reconciler container item_list the list of rows to enqueue True if successfully enqueued Find a device in the ring that is on this node on which to place a partition. Preference is given to a device that is a primary location for the partition. If no such device is found then a local device with weight is chosen, and failing that any local device. part a partition a node entry from the ring Get a local instance of the reconciler container broker that is appropriate to enqueue the given timestamp. timestamp the timestamp of the row to be enqueued a local reconciler broker Ensure any items merged to reconciler containers during replication are pushed out to correct nodes and any reconciler containers that do not belong on this node are removed. Run a replication pass once. Bases: ReplicatorRpc If broker has ownshardrange with an epoch then filter out an ownshardrange without an epoch, and log a warning about it. shards a list of candidate ShardRanges to merge broker a ContainerBroker logger a logger source string to log as source of shards a list of ShardRanges to actually merge Bases: BaseStorageServer WSGI Controller for the container server. Handle HTTP DELETE request. Handle HTTP GET request. The body of the response to a successful GET request contains a listing of either objects or shard ranges. The exact content of the listing is determined by a combination of request headers and query string parameters, as follows: The type of the listing is determined by the X-Backend-Record-Type header. If this header has value shard then the response body will be a list of shard ranges; if this header has value auto, and the container state is sharding or sharded, then the listing will be a list of shard ranges; otherwise the response body will be a list of objects. Both shard range and object listings may be filtered according to the constraints described" }, { "data": "However, the X-Backend-Ignore-Shard-Name-Filter header may be used to override the application of the marker, end_marker, includes and reverse parameters to shard range listings. These parameters will be ignored if the header has the value sharded and the current db sharding state is also sharded. Note that this header does not override the states constraint on shard range listings. 
The order of both shard range and object listings may be reversed by using a reverse query string parameter with a value in swift.common.utils.TRUE_VALUES. Both shard range and object listings may be constrained to a name range by the marker and end_marker query string parameters. Object listings will only contain objects whose names are greater than any marker value and less than any end_marker value. Shard range listings will only contain shard ranges whose namespace is greater than or includes any marker value and is less than or includes any end_marker value. Shard range listings may also be constrained by an includes query string parameter. If this parameter is present the listing will only contain shard ranges whose namespace includes the value of the parameter; any marker or end_marker parameters are ignored The length of an object listing may be constrained by the limit parameter. Object listings may also be constrained by prefix, delimiter and path query string parameters. Shard range listings will include deleted shard ranges if and only if the X-Backend-Include-Deleted header value is one of swift.common.utils.TRUE_VALUES. Object listings never include deleted objects. Shard range listings may be constrained to include only shard ranges whose state is specified by a query string states parameter. If present, the states parameter should be a comma separated list of either the string or integer representation of STATES. Alias values may be used in a states parameter value. The listing alias will cause the listing to include all shard ranges in a state suitable for contributing to an object listing. The updating alias will cause the listing to include all shard ranges in a state suitable to accept an object update. If either of these aliases is used then the shard range listing will if necessary be extended with a synthesised filler range in order to satisfy the requested name range when insufficient actual shard ranges are found. Any filler shard range will cover the otherwise uncovered tail of the requested name range and will point back to the same container. The auditing alias will cause the listing to include all shard ranges in a state useful to the sharder while auditing a shard container. This alias will not cause a filler range to be added, but will cause the containers own shard range to be included in the listing. For now, auditing is only supported when X-Backend-Record-Shard-Format is full. Shard range listings can be simplified to include only Namespace only attributes (name, lower and upper) if the caller send the header X-Backend-Record-Shard-Format with value namespace as a hint that it would prefer namespaces. If this header doesnt exist or the value is full, the listings will default to include all attributes of shard ranges. But if params has includes/marker/end_marker then the response will be full shard ranges, regardless the header of X-Backend-Record-Shard-Format. The response header X-Backend-Record-Type will tell the user what type it gets back. Listings are not normally returned from a deleted container. However, the X-Backend-Override-Deleted header may be used with a value in swift.common.utils.TRUE_VALUES to force a shard range listing to be returned from a deleted container whose DB file still" }, { "data": "req an instance of swift.common.swob.Request an instance of swift.common.swob.Response Returns a list of objects in response. req swob.Request object broker container DB broker object container container name params the request params. 
info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Returns a list of persisted shard ranges or namespaces in response. req swob.Request object broker container DB broker object container container name params the request params. info the global info for the container isdeleted the isdeleted status for the container. outcontenttype content type as a string. an instance of swift.common.swob.Response Handle HTTP HEAD request. Handle HTTP POST request. A POST request will update the containers put_timestamp, unless it has an X-Backend-No-Timestamp-Update header with a truthy value. req an instance of Request. Handle HTTP PUT request. Update or create container. Put object into container. Put shards into container. Handle HTTP REPLICATE request (json-encoded RPC calls for replication.) Handle HTTP UPDATE request (merge_items RPCs coming from the proxy.) Update the account server(s) with latest container info. req swob.Request object account account name container container name broker container DB broker object if all the account requests return a 404 error code, HTTPNotFound response object, if the account cannot be updated due to a malformed header, an HTTPBadRequest response object, otherwise None. The list of hosts were allowed to send syncs to. This can be overridden by data in self.realms_conf Validate that the index supplied maps to a policy. policy index from request, or None if not present HTTPBadRequest if the supplied index is bogus ContainerSyncCluster instance for validating sync-to values. Perform mutation to container listing records that are common to all serialization formats, and returns it as a dict. Converts created time to iso timestamp. Replaces size with swift_bytes content type parameter. record object entry record modified record Return the shard_range database record as a dict, the keys will depend on the database fields provided in the record. record shard entry record, either ShardRange or Namespace. shardrecordfull boolean, when true the timestamp field is added as last_modified in iso format. dict suitable for listing responses paste.deploy app factory for creating WSGI container server apps Convert container info dict to headers. Split and validate path for a container. req a swob request a tuple of path parts as strings Split and validate path for an object. req a swob request a tuple of path parts as strings Bases: Daemon Move objects that are in the wrong storage policy. Validate source object will satisfy the misplaced object queue entry and move to destination. qpolicyindex the policy_index for the source object account the account name of the misplaced object container the container name of the misplaced object obj the name of the misplaced object q_ts the timestamp of the misplaced object path the full path of the misplaced object for logging containerpolicyindex the policy_index of the destination source_ts the timestamp of the source object sourceobjstatus the HTTP status source object request sourceobjinfo the HTTP headers of the source object request sourceobjiter the body iter of the source object request Issue a DELETE request against the destination to match the misplaced DELETE against the source. Dump stats to logger, noop when stats have been already been logged in the last minute. 
Issue a delete object request to the container for the misplaced object queue" }, { "data": "container the misplaced objects container obj the name of the misplaced object q_ts the timestamp of the misplaced object q_record the timestamp of the queue entry N.B. qts will normally be the same time as qrecord except when an object was manually re-enqued. Process an entry and remove from queue on success. q_container the queue container qentry the rawobj name from the q_container queue_item a parsed entry from the queue Main entry point for concurrent processing of misplaced objects. Iterate over all queue entries and delegate processing to spawned workers in the pool. Process a possibly misplaced object write request. Determine correct destination storage policy by checking with primary containers. Check source and destination, copying or deleting into destination and cleaning up the source as needed. This method wraps reconcileobject for exception handling. info a queue entry dict True to indicate the request is fully processed successfully, otherwise False. Override this to run forever Process every entry in the queue. Check if a given entry should be handled by this process. container the queue container queue_item an entry from the queue Update stats tracking for metric and emit log message. Issue a delete object request to the given storage_policy. account the account name container the container name obj the object name timestamp the timestamp of the object to delete policy_index the policy index to direct the request path the path to be used for logging Add an object to the container reconcilers queue. This will cause the container reconciler to move it from its current storage policy index to the correct storage policy index. container_ring container ring account the misplaced objects account container the misplaced objects container obj the misplaced object objpolicyindex the policy index where the misplaced object currently is obj_timestamp the misplaced objects X-Timestamp. We need this to ensure that the reconciler doesnt overwrite a newer object with an older one. op the method of the operation (DELETE or PUT) force over-write queue entries newer than obj_timestamp conn_timeout max time to wait for connection to container server response_timeout max time to wait for response from container server .misplaced_object container name, False on failure. Success means a majority of containers got the update. You have to squint to see it, but the general strategy is just: return the newest (of the recreated) return the oldest I tried cleaning it up for awhile, but settled on just writing a bunch of tests instead. Once you get an intuitive sense for the nuance here you can try and see theres a better way to spell the boolean logic but it all ends up looking sorta hairy. -1 if info is correct, 1 if remote_info is better Talk directly to the primary container servers to delete a particular object listing. Does not talk to object servers; use this only when a container entry does not actually have a corresponding object. Get the name of a container into which a misplaced object should be enqueued. The name is the objects last modified time rounded down to the nearest hour. objtimestamp a string representation of the objects createdat time from its container db row. a container name Compare remote_info to info and decide if the remote storage policy index should be used instead of ours. 
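The naming rule for the reconciler queue container described above (the object's last modified time rounded down to the nearest hour) is easy to restate; a simplified sketch of the real helper in swift.container.reconciler:

```python
# A sketch of the queue-container naming rule: the object's created_at
# time, rounded down to the nearest hour, rendered as a string.
def reconciler_container_name(obj_timestamp):
    return str(int(float(obj_timestamp)) // 3600 * 3600)

print(reconciler_container_name('1712345678.12345'))  # -> '1712343600'
```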
Translate a reconciler container listing entry to a dictionary containing the parts of the misplaced object queue" }, { "data": "obj_info an entry in an a container listing with the required keys: name, content_type, and hash a queue entry dict with the keys: qpolicyindex, account, container, obj, qop, qts, q_record, and path Bases: object Encapsulates metadata associated with the process of cleaving a retiring DB. This metadata includes: ref: The unique part of the key that is used when persisting a serialized CleavingContext as sysmeta in the DB. The unique part of the key is based off the DB id. This ensures that each context is associated with a specific DB file. The unique part of the key is included in the CleavingContext but should not be modified by any caller. cursor: the upper bound of the last shard range to have been cleaved from the retiring DB. max_row: the retiring DBs max row; this is updated to the value of the retiring DBs max_row every time a CleavingContext is loaded for that DB, and may change during the process of cleaving the DB. cleavetorow: the value of max_row at the moment when cleaving starts for the DB. When cleaving completes (i.e. the cleave cursor has reached the upper bound of the cleaving namespace), cleavetorow is compared to the current max_row: if the two values are not equal then rows have been added to the DB which may not have been cleaved, in which case the CleavingContext is reset and cleaving is re-started. lastcleaveto_row: the minimum DB row from which cleaving should select objects to cleave; this is initially set to None i.e. all rows should be cleaved. If the CleavingContext is reset then the lastcleaveto_row is set to the current value of cleavetorow, which in turn is set to the current value of max_row by a subsequent call to start. The repeated cleaving therefore only selects objects in rows greater than the lastcleaveto_row, rather than cleaving the whole DB again. ranges_done: the number of shard ranges that have been cleaved from the retiring DB. ranges_todo: the number of shard ranges that are yet to be cleaved from the retiring DB. Returns a CleavingContext tracking the cleaving progress of the given brokers DB. broker an instances of ContainerBroker An instance of CleavingContext. Returns all cleaving contexts stored in the brokers DB. broker an instance of ContainerBroker list of tuples of (CleavingContext, timestamp) Persists the serialized CleavingContext as sysmeta in the given brokers DB. broker an instances of ContainerBroker Bases: ContainerSharderConf, ContainerReplicator Shards containers. Run the container sharder until stopped. Run the container sharder once. Iterates through all object rows in srcshardrange in name order yielding them in lists of up to batch_size in length. All batches of rows that are not marked deleted are yielded before all batches of rows that are marked deleted. broker A ContainerBroker. srcshardrange A ShardRange describing the source range. since_row include only object rows whose ROWID is greater than the given row id; by default all object rows are included. batch_size The maximum number of object rows to include in each yielded batch; defaults to cleaverowbatch_size. a generator of tuples of (list of rows, broker info dict) Iterates through all object rows in srcshardrange to place them in destination shard ranges provided by the destshardranges function. 
Yields tuples of (batch of object rows, destination shard range in which those object rows belong, broker" }, { "data": "If no destination shard range exists for a batch of object rows then tuples are yielded of (batch of object rows, None, broker info). This indicates to the caller that there are a non-zero number of object rows for which no destination shard range was found. Note that the same destination shard range may be referenced in more than one yielded tuple. broker A ContainerBroker. srcshardrange A ShardRange describing the source range. destshardranges A function which should return a list of destination shard ranges sorted in the order defined by sort_key(). a generator of tuples of (object row list, shard range, broker info dict) where shard_range may be None. Bases: object Combines new and existing shard ranges based on most recent state. newshardranges a list of ShardRange instances. existingshardranges a list of ShardRange instances. a list of ShardRange instances. Update donor shard ranges to shrinking state and merge donors and acceptors to broker. broker A ContainerBroker. acceptor_ranges A list of ShardRange that are to be acceptors. donor_ranges A list of ShardRange that are to be donors; these will have their state and timestamp updated. timestamp timestamp to use when updating donor state Find sequences of shard ranges that could be compacted into a single acceptor shard range. This function does not modify shard ranges. broker A ContainerBroker. shrink_threshold the number of rows below which a shard may be considered for shrinking into another shard expansion_limit the maximum number of rows that an acceptor shard range should have after other shard ranges have been compacted into it max_shrinking the maximum number of shard ranges that should be compacted into each acceptor; -1 implies unlimited. max_expanding the maximum number of acceptors to be found (i.e. the maximum number of sequences to be returned); -1 implies unlimited. include_shrinking if True then existing compactible sequences are included in the results; default is False. A list of ShardRangeList each containing a sequence of neighbouring shard ranges that may be compacted; the final shard range in the list is the acceptor Find all pairs of overlapping ranges in the given list. shard_ranges A list of ShardRange excludeparentchild If True then overlapping pairs that have a parent-child relationship within the past time period time_period are excluded from the returned set. Default is False. time_period the specified past time period in seconds. Value of 0 means all time in the past. a set of tuples, each tuple containing ranges that overlap with each other. Returns a list of all continuous paths through the shard ranges. An individual path may not necessarily span the entire namespace, but it will span a continuous namespace without gaps. shard_ranges A list of ShardRange. A list of ShardRangeList. Find gaps in the shard ranges and pairs of shard range paths that lead to and from those gaps. For each gap a single pair of adjacent paths is selected. The concatenation of all selected paths and gaps will span the entire namespace with no overlaps. shard_ranges a list of instances of ShardRange. within_range an optional ShardRange that constrains the search space; the method will only return gaps within this range. The default is the entire namespace. 
A list of tuples of (startpath, gaprange, end_path) where start_path is a list of ShardRanges leading to the gap, gap_range is a ShardRange synthesized to describe the namespace gap, and end_path is a list of ShardRanges leading from the" }, { "data": "When gaps start or end at the namespace minimum or maximum bounds, startpath and endpath may be null paths that contain a single ShardRange covering either the minimum or maximum of the namespace. Transform the given sequences of shard ranges into a list of acceptors and a list of shrinking donors. For each given sequence the final ShardRange in the sequence (the acceptor) is expanded to accommodate the other ShardRanges in the sequence (the donors). The donors and acceptors are then merged into the broker. broker A ContainerBroker. sequences A list of ShardRangeList Sorts the given list of paths such that the most preferred path is the first item in the list. paths A list of ShardRangeList. shardrangeto_span An instance of ShardRange that describes the namespace that would ideally be spanned by a path. Paths that include this namespace will be preferred over those that do not. A sorted list of ShardRangeList. Update the ownshardrange with the up-to-date object stats from the broker. Note: this method does not persist the updated ownshardrange; callers should use broker.mergeshardranges if the updated stats need to be persisted. broker an instance of ContainerBroker. ownshardrange and instance of ShardRange. ownshardrange with up-to-date object_count and bytes_used. Bases: Daemon Daemon to sync syncable containers. This is done by scanning the local devices for container databases and checking for x-container-sync-to and x-container-sync-key metadata values. If they exist, newer rows since the last sync will trigger PUTs or DELETEs to the other container. The actual syncing is slightly more complicated to make use of the three (or number-of-replicas) main nodes for a container without each trying to do the exact same work but also without missing work if one node happens to be down. Two sync points are kept per container database. All rows between the two sync points trigger updates. Any rows newer than both sync points cause updates depending on the nodes position for the container (primary nodes do one third, etc. depending on the replica count of course). After a sync run, the first sync point is set to the newest ROWID known and the second sync point is set to newest ROWID for which all updates have been sent. An example may help. Assume replica count is 3 and perfectly matching ROWIDs starting at 1. First sync run, database has 6 rows: SyncPoint1 starts as -1. SyncPoint2 starts as -1. No rows between points, so no all updates rows. Six rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node 3. SyncPoint1 is set as 6 (the newest ROWID known). SyncPoint2 is left as -1 since no all updates rows were synced. Next sync run, database has 12 rows: SyncPoint1 starts as 6. SyncPoint2 starts as -1. The rows between -1 and 6 all trigger updates (most of which should short-circuit on the remote end as having already been done). Six more rows newer than SyncPoint1, so a third of the rows are sent by node 1, another third by node 2, remaining third by node SyncPoint1 is set as 12 (the newest ROWID known). SyncPoint2 is set as 6 (the newest all updates ROWID). 
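A simplified sketch of the sharing rule in the example above. Note the real daemon keys each node's share off a per-row hash rather than the raw ROWID, so the plain modulo below is only illustrative:

```python
# Rows at or below SyncPoint2 are done; rows between the two sync points
# are sent by every node ("all updates"); rows newer than SyncPoint1 are
# split so each of the N replicas sends roughly 1/N of them.
def rows_to_send(rows, sync_point1, sync_point2, node_index, replica_count):
    for row in rows:
        if row['ROWID'] <= sync_point2:
            continue                      # already fully synced
        elif row['ROWID'] <= sync_point1:
            yield row                     # "all updates" range
        elif row['ROWID'] % replica_count == node_index:
            yield row                     # this node's share of new rows
```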
In this way, under normal circumstances each node sends its share of updates each run and just sends a batch of older updates to ensure nothing was missed. conf The dict of configuration values from the [container-sync] section of the" }, { "data": "containerring If None, the <swiftdir>/container.ring.gz will be loaded. This is overridden by unit tests. The list of hosts were allowed to send syncs to. This can be overridden by data in self.realms_conf The dict of configuration values from the [container-sync] section of the container-server.conf. Number of successful DELETEs triggered. Number of containers that had a failure of some type. Number of successful PUTs triggered. swift.common.ring.Ring for locating containers. Number of containers whose sync has been turned off, but are not yet cleared from the sync store. Per container stats. These are collected per container. puts - the number of puts that were done for the container deletes - the number of deletes that were fot the container bytes - the total number of bytes transferred per the container Checks the given path for a container database, determines if syncing is turned on for that database and, if so, sends any updates to the other container. path the path to a container db Sends the update the row indicates to the sync_to container. Update can be either delete or put. row The updated row in the local database triggering the sync update. sync_to The URL to the remote container. user_key The X-Container-Sync-Key to use when sending requests to the other container. broker The local container database broker. info The get_info result from the local container database broker. realm The realm from self.realms_conf, if there is one. If None, fallback to using the older allowedsynchosts way of syncing. realmkey The realm key from self.realmsconf, if there is one. If None, fallback to using the older allowedsynchosts way of syncing. True on success Number of containers with sync turned on that were successfully synced. Maximum amount of time to spend syncing a container before moving on to the next one. If a container sync hasnt finished in this time, itll just be resumed next scan. Path to the local device mount points. Minimum time between full scans. This is to keep the daemon from running wild on near empty systems. Logger to use for container-sync log lines. Indicates whether mount points should be verified as actual mount points (normally true, false for tests and SAIO). ContainerSyncCluster instance for validating sync-to values. Writes a report of the stats to the logger and resets the stats for the next report. Time of last stats report. Runs container sync scans until stopped. Runs a single container sync scan. ContainerSyncStore instance for iterating over synced containers Bases: Daemon Update container information in account listings. Report container info to an account server. node node dictionary from the account ring part partition the account is on container container name put_timestamp put timestamp delete_timestamp delete timestamp count object count in the container bytes bytes used in the container storagepolicyindex the policy index for the container Walk the path looking for container DBs and process them. path path to walk Get the account ring. Load it if it hasnt been yet. Get paths to all of the partitions on each drive to be processed. a list of paths Process a container, and update the information in the account. dbfile container DB to process Run the updater continuously. Run the updater once. 
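The account update the updater performs is an ordinary backend PUT carrying the stats listed above as headers. A hedged sketch; the host, port, device, partition and values are hypothetical:

```python
# A sketch of reporting container info to an account server, using the
# backend headers for put/delete timestamps, object count and bytes used.
import http.client

headers = {
    'X-Put-Timestamp': '1712345678.12345',
    'X-Delete-Timestamp': '0',
    'X-Object-Count': '42',
    'X-Bytes-Used': '1048576',
    'X-Backend-Storage-Policy-Index': '0',
}
conn = http.client.HTTPConnection('127.0.0.1', 6202)
conn.request('PUT', '/sdb1/2048/AUTH_test/a_container', body=b'',
             headers=headers)
print(conn.getresponse().status)  # 2xx on success
```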
[ { "data": "Database code for Swift Timeout for trying to connect to a DB Whether calls will be made to preallocate disk space for database files. Bases: DatabaseError More friendly error messages for DB Errors. Bases: object Encapsulates working with a database. Mark the DB as deleted timestamp internalized delete timestamp Check if the broker abstraction contains any undeleted records. Use with the with statement; returns a database connection. Get a list of objects in the database between start and end. start start ROWID count number to get list of objects between start and end Get information about the DB required for replication. dict containing keys from getinfo plus maxrow and metadata count and metadata is the raw string. Gets the most recent sync point for a server from the sync table. id remote ID to get the sync_point for incoming if True, get the last incoming sync, otherwise get the last outgoing sync the sync point, or -1 if the id doesnt exist. Get a serialized copy of the sync table. incoming if True, get the last incoming sync, otherwise get the last outgoing sync includetimestamp If True include the updatedat timestamp list of {remoteid, syncpoint} or {remoteid, syncpoint, updated_at} if include_timestamp is True. Create the DB The storagepolicyindex is passed through to the subclasss _initialize method. It is ignored by AccountBroker. put_timestamp internalized timestamp of initial PUT request storagepolicyindex only required for containers Check if the DB is considered to be deleted. True if the DB is considered to be deleted, False otherwise Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Use with the with statement; locks a database. Turn this db record dict into the format this service uses for pending pickles. Save :param:item_list to the database. Merge a list of sync points with the incoming sync table. sync_points list of sync points where a sync point is a dict of {syncpoint, remoteid} incoming if True, get the last incoming sync, otherwise get the last outgoing sync Used in replication to handle updating timestamps. created_at create timestamp put_timestamp put timestamp delete_timestamp delete timestamp Returns the metadata dict for the database. The metadata dict values are tuples of (value, timestamp) where the timestamp indicates when that key was set to that value. Re-id the database. This should be called after an rsync. remote_id the ID of the remote database being rsynced in Checks the exception info to see if it indicates a quarantine situation (malformed or corrupted database). If not, the original exception will be reraised. If so, the database will be quarantined and a new" }, { "data": "will be raised indicating the action taken. Put a record into the DB. If the DB has an associated pending file with space then the record is appended to that file and a commit to the DB is deferred. If its pending file is full then the record will be committed immediately. record a record to be added to the DB. DatabaseConnectionError if the DB file does not exist or if skip_commits is True. LockTimeout if a timeout occurs while waiting to take a lock to write to the pending file. The database will be quarantined and a sqlite3.DatabaseError will be raised indicating the action taken. Delete reclaimable rows and metadata from the db. 
By default this method will delete rows from the dbcontainstype table that are marked deleted and whose created_at timestamp is < agetimestamp, and deletes rows from incomingsync and outgoing_sync where the updatedat timestamp is < synctimestamp. In addition, this calls the reclaimmetadata() method. Subclasses may reclaim other items by overriding _reclaim(). agetimestamp max createdat timestamp of object rows to delete synctimestamp max updateat timestamp of sync rows to delete Updates the metadata dict for the database. The metadata dict values are tuples of (value, timestamp) where the timestamp indicates when that key was set to that value. Key/values will only be overwritten if the timestamp is newer. To delete a key, set its value to (, timestamp). These empty keys will eventually be removed by reclaim() Update the put_timestamp. Only modifies it if it is greater than the current timestamp. timestamp internalized put timestamp Update the statuschangedat field in the stat table. Only modifies statuschangedat if the timestamp is greater than the current statuschangedat timestamp. timestamp internalized timestamp Use with with statement; updates timeout within the block. Validates that metadata falls within acceptable limits. metadata to be validated HTTPBadRequest if MAXMETACOUNT or MAXMETAOVERALL_SIZE is exceeded, or if metadata contains non-UTF-8 data Bases: DatabaseError More friendly error messages for DB Errors. Bases: Connection SQLite DB Connection handler that plays well with eventlet. Commit any pending transaction to the database. If there is no open transaction, this method is a no-op. Return a cursor for the connection. Executes an SQL statement. Bases: Cursor SQLite Cursor handler that plays well with eventlet. Executes an SQL statement. Pickle protocol to use Whether calls will be made to log queries (py3 only) Bases: object Encapsulates reclamation of deleted rows in a database. Return the number of remaining tombstones newer than age_timestamp. Executes the reclaim method if it has not already been called on this instance. The number of tombstones in the broker that are newer than" }, { "data": "Perform reclaim of deleted rows older than age_timestamp. Each entry in the account and container databases is XORed by the 128-bit hash on insert or delete. This serves as a rolling, order-independent hash of the contents. (check + XOR) old hex representation of the current DB hash name name of the object or container being inserted timestamp internalized timestamp of the new record a hex representation of the new hash value This should only be used when you need a real dict, i.e. when youre going to serialize the results. Returns a properly configured SQLite database connection. path path to DB timeout timeout for connection okaytocreate if True, create the DB if it doesnt exist DB connection object Weve cargo culted our consumers to be tolerant of various expressions of zero in our databases for backwards compatibility with less disciplined producers. Bases: BufferedHTTPConnection Helper to simplify REPLICATEing to a remote server. Make an HTTP REPLICATE request args list of json-encodable objects bufferedhttp response object Bases: Daemon Implements the logic for directing db replication. Cleanup non primary database from disk if needed. 
broker the broker for the database were replicating orig_info snapshot of the broker replication info dict taken before replication responses a list of boolean success values for each replication request to other nodes returns False if deletion of the database was attempted but unsuccessful, otherwise returns True. Extract the device name from an object path. Returns UNKNOWN if the path could not be extracted successfully for some reason. object_file the path to a database file. Replicate dbs under the given root in an infinite loop. Run a replication pass once. Bases: object Handle Replication RPC calls. TODO(redbo): document please :) True if the directory name is a valid partition number, False otherwise. In the case that a corrupt file is found, move it to a quarantined area to allow replication to fix it. object_file path to corrupt file server_type type of file that is corrupt (container or account) Generator to walk the data dirs in a round robin manner, evenly hitting each device on the system, and yielding any .db files found (in their proper places). The partitions within each data dir are walked randomly, however. datadirs a list of tuples of (path, context, partition_filter) to walk. The context may be any object; the context is not used by this function but is included with each yielded tuple. A generator of (partition, pathtodb_file, context) Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
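The rolling, order-independent hash described above is easy to restate. A sketch equivalent to the chexor scheme (the real helper lives in swift.common.db):

```python
# Each record's "name-timestamp" MD5 is XORed into the running 128-bit
# hash, so inserts and deletes can be applied in any order.
from hashlib import md5

def chexor(old_hex, name, timestamp):
    new = md5(('%s-%s' % (name, timestamp)).encode('utf8')).hexdigest()
    return '%032x' % (int(old_hex, 16) ^ int(new, 16))

h = chexor('0' * 32, 'obj1', '1712345678.12345')
assert chexor(h, 'obj1', '1712345678.12345') == '0' * 32  # XOR undoes itself
```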
{ "category": "Runtime", "file_name": "object_versioning.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Your Object Storage system might not enable all features that you read about because your service provider chooses which features to enable. To discover which features are enabled in your Object Storage system, use the /info request. However, your service provider might have disabled the /info request, or you might be using an older version that does not support the /info request. To use the /info request, send a GET request using the /info path to the Object Store endpoint as shown in this example: ``` ``` This example shows a truncated response body: ``` { \"swift\":{ \"version\":\"1.11.0\" }, \"staticweb\":{ }, \"tempurl\":{ } } ``` This output shows that the Object Storage system has enabled the static website and temporary URL features. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "overview_global_cluster.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swifts default configuration is currently designed to work in a single region, where a region is defined as a group of machines with high-bandwidth, low-latency links between them. However, configuration options exist that make running a performant multi-region Swift cluster possible. For the rest of this section, we will assume a two-region Swift cluster: region 1 in San Francisco (SF), and region 2 in New York (NY). Each region shall contain within it 3 zones, numbered 1, 2, and 3, for a total of 6 zones. Note The proxy-server configuration options described below can be given generic settings in the [app:proxy-server] configuration section and/or given specific settings for individual policies using Per policy configuration. This setting, combined with sorting_method setting, makes the proxy server prefer local backend servers for GET and HEAD requests over non-local ones. For example, it is preferable for an SF proxy server to service object GET requests by talking to SF object servers, as the client will receive lower latency and higher throughput. By default, Swift randomly chooses one of the three replicas to give to the client, thereby spreading the load evenly. In the case of a geographically-distributed cluster, the administrator is likely to prioritize keeping traffic local over even distribution of results. This is where the read_affinity setting comes in. Example: ``` [app:proxy-server] sorting_method = affinity read_affinity = r1=100 ``` This will make the proxy attempt to service GET and HEAD requests from backends in region 1 before contacting any backends in region 2. However, if no region 1 backends are available (due to replica placement, failed hardware, or other reasons), then the proxy will fall back to backend servers in other regions. Example: ``` [app:proxy-server] sorting_method = affinity read_affinity = r1z1=100, r1=200 ``` This will make the proxy attempt to service GET and HEAD requests from backends in region 1 zone 1, then backends in region 1, then any other backends. If a proxy is physically close to a particular zone or zones, this can provide bandwidth savings. For example, if a zone corresponds to servers in a particular rack, and the proxy server is in that same rack, then setting read_affinity to prefer reads from within the rack will result in less traffic between the top-of-rack switches. The read_affinity setting may contain any number of region/zone specifiers; the priority number (after the equals sign) determines the ordering in which backend servers will be contacted. A lower number means higher priority. Note that read_affinity only affects the ordering of primary nodes (see ring docs for definition of primary node), not the ordering of handoff nodes. This setting makes the proxy server prefer local backend servers for object PUT requests over non-local ones. For example, it may be preferable for an SF proxy server to service object PUT requests by talking to SF object servers, as the client will receive lower latency and higher" }, { "data": "However, if this setting is used, note that a NY proxy server handling a GET request for an object that was PUT using write affinity may have to fetch it across the WAN link, as the object wont immediately have any replicas in NY. However, replication will move the objects replicas to their proper homes in both SF and NY. One potential issue with write_affinity is, end user may get 404 error when deleting objects before replication. 
The writeaffinityhandoffdeletecount setting is used together with write_affinity in order to solve that issue. With its default configuration, Swift will calculate the proper number of handoff nodes to send requests to. Note that only object PUT/DELETE requests are affected by the write_affinity setting; POST, GET, HEAD, OPTIONS, and account/container PUT requests are not affected. This setting lets you trade data distribution for throughput. If write_affinity is enabled, then object replicas will initially be stored all within a particular region or zone, thereby decreasing the quality of the data distribution, but the replicas will be distributed over fast WAN links, giving higher throughput to clients. Note that the replicators will eventually move objects to their proper, well-distributed homes. The write_affinity setting is useful only when you dont typically read objects immediately after writing them. For example, consider a workload of mainly backups: if you have a bunch of machines in NY that periodically write backups to Swift, then odds are that you dont then immediately read those backups in SF. If your workload doesnt look like that, then you probably shouldnt use write_affinity. The writeaffinitynode_count setting is only useful in conjunction with write_affinity; it governs how many local object servers will be tried before falling back to non-local ones. Example: ``` [app:proxy-server] write_affinity = r1 writeaffinitynode_count = 2 * replicas ``` Assuming 3 replicas, this configuration will make object PUTs try storing the objects replicas on up to 6 disks (2 * replicas) in region 1 (r1). Proxy server tries to find 3 devices for storing the object. While a device is unavailable, it queries the ring for the 4th device and so on until 6th device. If the 6th disk is still unavailable, the last replica will be sent to other region. It doesnt mean therell have 6 replicas in region 1. You should be aware that, if you have data coming into SF faster than your replicators are transferring it to NY, then your clusters data distribution will get worse and worse over time as objects pile up in SF. If this happens, it is recommended to disable write_affinity and simply let object PUTs traverse the WAN link, as that will naturally limit the object growth rate to what your WAN link can handle. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
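Returning to the write_affinity_handoff_delete_count option described above: if the automatic calculation ever needs to be overridden, the option can also be set explicitly. A hedged example; auto, the default, lets the proxy work the number out itself:

```
[app:proxy-server]
write_affinity = r1
write_affinity_handoff_delete_count = auto
```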
{ "category": "Runtime", "file_name": "overview_large_objects.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Normally to create, read and modify containers and objects, you must have the appropriate roles on the project associated with the account, i.e., you must be the owner of the account. However, an owner can grant access to other users by using an Access Control List (ACL). There are two types of ACLs: Container ACLs. These are specified on a container and apply to that container only and the objects in the container. Account ACLs. These are specified at the account level and apply to all containers and objects in the account. Container ACLs are stored in the X-Container-Write and X-Container-Read metadata. The scope of the ACL is limited to the container where the metadata is set and the objects in the container. In addition: X-Container-Write grants the ability to perform PUT, POST and DELETE operations on objects within a container. It does not grant the ability to perform POST or DELETE operations on the container itself. Some ACL elements also grant the ability to perform HEAD or GET operations on the container. X-Container-Read grants the ability to perform GET and HEAD operations on objects within a container. Some of the ACL elements also grant the ability to perform HEAD or GET operations on the container itself. However, a container ACL does not allow access to privileged metadata (such as X-Container-Sync-Key). Container ACLs use the V1 ACL syntax which is a comma separated string of elements as shown in the following example: ``` .r:,.rlistings,7ec59e87c6584c348b563254aae4c221: ``` Spaces may occur between elements as shown in the following example: ``` .r : , .rlistings, 7ec59e87c6584c348b563254aae4c221: ``` However, these spaces are removed from the value stored in the X-Container-Write and X-Container-Read metadata. In addition, the .r: string can be written as .referrer:, but is stored as .r:. While all auth systems use the same syntax, the meaning of some elements is different because of the different concepts used by different auth systems as explained in the following sections: Common ACL Elements Keystone Auth ACL Elements TempAuth ACL Elements The following table describes elements of an ACL that are supported by both Keystone auth and TempAuth. These elements should only be used with X-Container-Read (with the exception of .rlistings, an error will occur if used with X-Container-Write): | Element | Description | |:|:--| | .r:* | Any user has access to objects. No token is required in the request. | | .r:<referrer> | The referrer is granted access to objects. The referrer is identified by the Referer request header in the request. No token is required. | | .r:-<referrer> | This syntax (with - prepended to the referrer) is supported. However, it does not deny access if another element (e.g., .r:*) grants access. | | .rlistings | Any user can perform a HEAD or GET operation on the container provided the user also has read access on objects (e.g., also has .r:* or .r:<referrer>. No token is" }, { "data": "| Element Description .r:* Any user has access to objects. No token is required in the request. .r:<referrer> The referrer is granted access to objects. The referrer is identified by the Referer request header in the request. No token is required. .r:-<referrer> This syntax (with - prepended to the referrer) is supported. However, it does not deny access if another element (e.g., .r:*) grants access. .rlistings Any user can perform a HEAD or GET operation on the container provided the user also has read access on objects (e.g., also has .r:* or .r:<referrer>. 
No token is required. The following table describes elements of an ACL that are supported only by Keystone auth. Keystone auth also supports the elements described in Common ACL Elements. A token must be included in the request for any of these ACL elements to take effect. | Element | Description | |:--|:-| | <project-id>:<user-id> | The specified user, provided a token scoped to the project is included in the request, is granted access. Access to the container is also granted when used in X-Container-Read. | | <project-id>:* | Any user with a role in the specified Keystone project has access. A token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. | | *:<user-id> | The specified user has access. A token for the user (scoped to any project) must be included in the request. Access to the container is also granted when used in X-Container-Read. | | : | Any user has access. Access to the container is also granted when used in X-Container-Read. The : element differs from the .r: element because : requires that a valid token is included in the request whereas .r: does not require a token. In addition, .r:* does not grant access to the container listing. | | <role_name> | A user with the specified role name on the project within which the container is stored is granted access. A user token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. | Element Description <project-id>:<user-id> The specified user, provided a token scoped to the project is included in the request, is granted access. Access to the container is also granted when used in X-Container-Read. <project-id>:* Any user with a role in the specified Keystone project has access. A token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. *:<user-id> The specified user has access. A token for the user (scoped to any project) must be included in the request. Access to the container is also granted when used in X-Container-Read. : Any user has access. Access to the container is also granted when used in X-Container-Read. The : element differs from the" }, { "data": "element because : requires that a valid token is included in the request whereas .r:* does not require a token. In addition, .r:* does not grant access to the container listing. <role_name> A user with the specified role name on the project within which the container is stored is granted access. A user token scoped to the project must be included in the request. Access to the container is also granted when used in X-Container-Read. Note Keystone project (tenant) or user names (i.e., <project-name>:<user-name>) must no longer be used because with the introduction of domains in Keystone, names are not globally unique. You should use user and project ids instead. For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee project, the grantee user and the project being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the Keystone V2 API) or are all in the default domain to which legacy accounts would have been migrated. The following table describes elements of an ACL that are supported only by TempAuth. TempAuth auth also supports the elements described in Common ACL Elements. | Element | Description | |:|:-| | <user-name> | The named user is granted access. 
The wildcard (*) character is not supported. A token from the user must be included in the request. | Element Description <user-name> The named user is granted access. The wildcard (*) character is not supported. A token from the user must be included in the request. Container ACLs may be set by including X-Container-Write and/or X-Container-Read headers with a PUT or a POST request to the container URL. The following examples use the swift command line client which support these headers being set via its --write-acl and --read-acl options. The following allows anybody to list objects in the www container and download objects. The users do not need to include a token in their request. This ACL is commonly referred to as making the container public. It is useful when used with StaticWeb: ``` swift post www --read-acl \".r:*,.rlistings\" ``` The following allows anybody to upload or download objects. However, to download an object, the exact name of the object must be known since users cannot list the objects in the container. The users must include a Keystone token in the upload request. However, it does not need to be scoped to the project associated with the container: ``` swift post www --read-acl \".r:\" --write-acl \":*\" ``` The following allows any member of the 77b8f82565f14814bece56e50c4c240f project to upload and download objects or to list the contents of the www" }, { "data": "A token scoped to the 77b8f82565f14814bece56e50c4c240f project must be included in the request: ``` swift post www --read-acl \"77b8f82565f14814bece56e50c4c240f:*\" \\ --write-acl \"77b8f82565f14814bece56e50c4c240f:*\" ``` The following allows any user that has been assigned the myreadaccess_role on the project within which the www container is stored to download objects or to list the contents of the www container. A user token scoped to the project must be included in the download or list request: ``` swift post www --read-acl \"myreadaccess_role\" ``` The following allows any request from the example.com domain to access an object in the container: ``` swift post www --read-acl \".r:.example.com\" ``` However, the request from the user must contain the appropriate Referer header as shown in this example request: ``` curl -i $publicURL/www/document --head -H \"Referer: http://www.example.com/index.html\" ``` Note The Referer header is included in requests by many browsers. However, since it is easy to create a request with any desired value in the Referer header, the referrer ACL has very weak security. Sharing a Container with another user requires the knowledge of few parameters regarding the users. The sharing user must know: the OpenStack user id of the other user The sharing user must communicate to the other user: the name of the shared container the OSSTORAGEURL Usually the OSSTORAGEURL is not exposed directly to the user because the swift client by default automatically construct the OSSTORAGEURL based on the User credential. We assume that in the current directory there are the two client environment script for the two users sharing.openrc and other.openrc. 
The sharing.openrc should be similar to the following: ``` export OS_USERNAME=sharing export OS_PASSWORD=password export OSTENANTNAME=projectName export OSAUTHURL=https://identityHost:portNumber/v2.0 export OSTENANTID=tenantIDString export OSREGIONNAME=regionName export OS_CACERT=/path/to/cacertFile ``` The other.openrc should be similar to the following: ``` export OS_USERNAME=other export OS_PASSWORD=otherPassword export OSTENANTNAME=otherProjectName export OSAUTHURL=https://identityHost:portNumber/v2.0 export OSTENANTID=tenantIDString export OSREGIONNAME=regionName export OS_CACERT=/path/to/cacertFile ``` For more information see using the OpenStack RC file First we figure out the other user id: ``` . other.openrc OUID=\"$(openstack user show --format json \"${OS_USERNAME}\" | jq -r .id)\" ``` or alternatively: ``` . other.openrc OUID=\"$(openstack token issue -f json | jq -r .user_id)\" ``` Then we figure out the storage url of the sharing user: ``` sharing.openrc SURL=\"$(swift auth | awk -F = '/OSSTORAGEURL/ {print $2}')\" ``` Running as the sharing user create a shared container named shared in read-only mode with the other user using the proper acl: ``` sharing.openrc swift post --read-acl \"*:${OUID}\" shared ``` Running as the sharing user create and upload a test file: ``` touch void swift upload shared void ``` Running as the other user list the files in the shared container: ``` other.openrc swift --os-storage-url=\"${SURL}\" list shared ``` Running as the other user download the shared container in the /tmp directory: ``` cd /tmp swift --os-storage-url=\"${SURL}\" download shared ``` Note Account ACLs are not currently supported by Keystone auth The X-Account-Access-Control header is used to specify account-level ACLs in a format specific to the auth system. These headers are visible and settable only by account owners (those for whom swift_owner is true). Behavior of account ACLs is" }, { "data": "In the case of TempAuth, if an authenticated user has membership in a group which is listed in the ACL, then the user is allowed the access level of that ACL. Account ACLs use the V2 ACL syntax, which is a JSON dictionary with keys named admin, read-write, and read-only. (Note the case sensitivity.) An example value for the X-Account-Access-Control header looks like this, where a, b and c are user names: ``` {\"admin\":[\"a\",\"b\"],\"read-only\":[\"c\"]} ``` Keys may be absent (as shown in above example). The recommended way to generate ACL strings is as follows: ``` from swift.common.middleware.acl import format_acl acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] } aclstring = formatacl(version=2, acldict=acldata) ``` Using the format_acl() method will ensure that JSON is encoded as ASCII (using e.g. u1234 for Unicode). While its permissible to manually send curl commands containing X-Account-Access-Control headers, you should exercise caution when doing so, due to the potential for human error. Within the JSON dictionary stored in X-Account-Access-Control, the keys have the following meanings: | Access Level | Description | |:|:--| | read-only | These identities can read everything (except privileged headers) in the account. Specifically, a user with read-only account access can get a list of containers in the account, list the contents of any container, retrieve any object, and see the (non-privileged) headers of the account, any container, or any object. | | read-write | These identities can read or write (or create) any container. 
A user with read-write account access can create new containers, set any unprivileged container headers, overwrite objects, delete containers, etc. A read-write user can NOT set account headers (or perform any PUT/POST/DELETE requests on the account). | | admin | These identities have swift_owner privileges. A user with admin account access can do anything the account owner can, including setting account headers and any privileged headers and thus granting read-only, read-write, or admin access to other users. | Access Level Description read-only These identities can read everything (except privileged headers) in the account. Specifically, a user with read-only account access can get a list of containers in the account, list the contents of any container, retrieve any object, and see the (non-privileged) headers of the account, any container, or any object. read-write These identities can read or write (or create) any container. A user with read-write account access can create new containers, set any unprivileged container headers, overwrite objects, delete containers, etc. A read-write user can NOT set account headers (or perform any PUT/POST/DELETE requests on the account). admin These identities have swift_owner privileges. A user with admin account access can do anything the account owner can, including setting account headers and any privileged headers and thus granting read-only, read-write, or admin access to other users. For more details, see swift.common.middleware.tempauth. For details on the ACL format, see swift.common.middleware.acl. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
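A hedged companion to the format_acl() example above: the same module can also parse a header value back into a dict. A sketch, assuming parse_acl in swift.common.middleware.acl accepts the version=2 JSON payload shown:

```python
from swift.common.middleware.acl import format_acl, parse_acl

acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
acl_string = format_acl(version=2, acl_dict=acl_data)

# Round-trip the header value back into a dict before trusting it.
parsed = parse_acl(version=2, data=acl_string)
assert parsed == acl_data
```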
{ "category": "Runtime", "file_name": "overview_replication.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Because each replica in Swift functions independently, and clients generally require only a simple majority of nodes responding to consider an operation successful, transient failures like network partitions can quickly cause replicas to diverge. These differences are eventually reconciled by asynchronous, peer-to-peer replicator processes. The replicator processes traverse their local filesystems, concurrently performing operations in a manner that balances load across physical disks. Replication uses a push model, with records and files generally only being copied from local to remote replicas. This is important because data on the node may not belong there (as in the case of handoffs and ring changes), and a replicator cant know what data exists elsewhere in the cluster that it should pull in. Its the duty of any node that contains data to ensure that data gets to where it belongs. Replica placement is handled by the ring. Every deleted record or file in the system is marked by a tombstone, so that deletions can be replicated alongside creations. The replication process cleans up tombstones after a time period known as the consistency window. The consistency window encompasses replication duration and how long transient failure can remove a node from the cluster. Tombstone cleanup must be tied to replication to reach replica convergence. If a replicator detects that a remote drive has failed, the replicator uses the getmorenodes interface for the ring to choose an alternate node with which to synchronize. The replicator can maintain desired levels of replication in the face of disk failures, though some replicas may not be in an immediately usable location. Note that the replicator doesnt maintain desired levels of replication when other failures, such as entire node failures, occur because most failure are transient. Replication is an area of active development, and likely rife with potential improvements to speed and correctness. There are two major classes of replicator - the db replicator, which replicates accounts and containers, and the object replicator, which replicates object data. The first step performed by db replication is a low-cost hash comparison to determine whether two replicas already match. Under normal operation, this check is able to verify that most databases in the system are already synchronized very quickly. If the hashes differ, the replicator brings the databases in sync by sharing records added since the last sync point. This sync point is a high water mark noting the last record at which two databases were known to be in sync, and is stored in each database as a tuple of the remote database id and record" }, { "data": "Database ids are unique amongst all replicas of the database, and record ids are monotonically increasing integers. After all new records have been pushed to the remote database, the entire sync table of the local database is pushed, so the remote database can guarantee that it is in sync with everything with which the local database has previously synchronized. If a replica is found to be missing entirely, the whole local database file is transmitted to the peer using rsync(1) and vested with a new unique id. In practice, DB replication can process hundreds of databases per concurrency setting per second (up to the number of available CPUs or disks) and is bound by the number of DB transactions that must be performed. 
The initial implementation of object replication simply performed an rsync to push data from a local partition to all remote servers it was expected to exist on. While this performed adequately at small scale, replication times skyrocketed once directory structures could no longer be held in RAM. We now use a modification of this scheme in which a hash of the contents for each suffix directory is saved to a per-partition hashes file. The hash for a suffix directory is invalidated when the contents of that suffix directory are modified. The object replication process reads in these hash files, calculating any invalidated hashes. It then transmits the hashes to each remote server that should hold the partition, and only suffix directories with differing hashes on the remote server are rsynced. After pushing files to the remote server, the replication process notifies it to recalculate hashes for the rsynced suffix directories. Performance of object replication is generally bound by the number of uncached directories it has to traverse, usually as a result of invalidated suffix directory hashes. Using write volume and partition counts from our running systems, it was designed so that around 2% of the hash space on a normal node will be invalidated per day, which has experimentally given us acceptable replication speeds. Work continues with a new ssync method where rsync is not used at all and instead all-Swift code is used to transfer the objects. At first, this ssync will just strive to emulate the rsync behavior. Once deemed stable it will open the way for future improvements in replication since well be able to easily add code in the replication path instead of trying to alter the rsync code base and distributing such modifications. One of the first improvements planned is an index.db that will replace the" }, { "data": "This will allow quicker updates to that data as well as more streamlined queries. Quite likely well implement a better scheme than the current one hashes.pkl uses (hash-trees, that sort of thing). Another improvement planned all along the way is separating the local disk structure from the protocol path structure. This separation will allow ring resizing at some point, or at least ring-doubling. Note that for objects being stored with an Erasure Code policy, the replicator daemon is not involved. Instead, the reconstructor is used by Erasure Code policies and is analogous to the replicator for Replication type policies. See Erasure Code Support for complete information on both Erasure Code support as well as the reconstructor. The hashes.pkl file is a key element for both replication and reconstruction (for Erasure Coding). Both daemons use this file to determine if any kind of action is required between nodes that are participating in the durability scheme. The file itself is a pickled dictionary with slightly different formats depending on whether the policy is Replication or Erasure Code. In either case, however, the same basic information is provided between the nodes. The dictionary contains a dictionary where the key is a suffix directory name and the value is the MD5 hash of the directory listing for that suffix. In this manner, the daemon can quickly identify differences between local and remote suffix directories on a per partition basis as the scope of any one hashes.pkl file is a partition directory. For Erasure Code policies, there is a little more information required. 
An object's hash directory may contain multiple fragments of a single object in the event that the node is acting as a handoff or perhaps if a rebalance is underway. Each fragment of an object is stored with a fragment index, so the hashes.pkl for an Erasure Code partition is still a dictionary keyed on the suffix directory name; however, each value is another dictionary, keyed on the fragment index, with an MD5 hash for each one as the value. Some files within an object hash directory don't require a fragment index, so None is used to represent those. Below are examples of what these dictionaries might look like. Replication hashes.pkl: ``` {'a43': '72018c5fbfae934e1f56069ad4425627', 'b23': '12348c5fbfae934e1f56069ad4421234'} ``` Erasure Code hashes.pkl: ``` {'a43': {None: '72018c5fbfae934e1f56069ad4425627', 2: 'b6dd6db937cb8748f50a5b6e4bc3b808'}, 'b23': {None: '12348c5fbfae934e1f56069ad4421234', 1: '45676db937cb8748f50a5b6e4bc34567'}} ``` Swift has support for using a dedicated network for replication traffic. For more information see Overview of dedicated replication network." } ]
{ "category": "Runtime", "file_name": "overview_ring.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift development currently targets Ubuntu Server 22.04, but should work on most Linux platforms. Swift is written in Python and has these dependencies: Python (2.7 or 3.6-3.10) rsync 3.x liberasurecode The Python packages listed in the requirements file Testing additionally requires the test dependencies Testing requires these distribution packages To get started with development with Swift, or to just play around, the following docs will be useful: Swift All in One - Set up a VM with Swift installed Development Guidelines First Contribution to Swift Associated Projects There are many clients in the ecosystem. The official CLI and SDK is python-swiftclient. Source code Python Package Index If you want to set up and configure Swift for a production cluster, the following doc should be useful: Object Storage Install Guide Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "object.html#module-swift.obj.reconstructor.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "If configured, lists the activated capabilities for this version of the OpenStack Object Storage API. List activated capabilities Lists the activated capabilities for this version of the OpenStack Object Storage API. Most of the information is public i.e. visible to all callers. However, some configuration and capability items are reserved for the administrators of the system. To access this data, the swiftinfosig and swiftinfoexpires query parameters must be added to the request. Normal response codes: 200 Error response codes: | Name | In | Type | Description | |:--|:|:--|:--| | swiftinfosig (Optional) | query | string | A hash-based message authentication code (HMAC) that enables access to administrator-only information. To use this parameter, the swiftinfoexpires parameter is also required. | | swiftinfoexpires (Optional) | query | integer | The time at which swiftinfosig expires. The time is in UNIX Epoch time stamp format. | Name In Type Description swiftinfo_sig (Optional) query string A hash-based message authentication code (HMAC) that enables access to administrator-only information. To use this parameter, the swiftinfo_expires parameter is also required. swiftinfo_expires (Optional) query integer The time at which swiftinfo_sig expires. The time is in UNIX Epoch time stamp format. ``` { \"swift\": { \"version\": \"1.11.0\" }, \"slo\": { \"maxmanifestsegments\": 1000, \"maxmanifestsize\": 2097152, \"minsegmentsize\": 1 }, \"staticweb\": {}, \"tempurl\": {} } ``` Lists containers for an account. Creates, updates, shows, and deletes account metadata. For more information and concepts about accounts see Object Storage API overview. Show account details and list containers Shows details for an account and lists containers, sorted by name, in the account. The sort order for the name is based on a binary comparison, a single built-in collating sequence that compares string data by using the SQLite memcmp() function, regardless of text encoding. See Collating Sequences. The response body returns a list of containers. The default response (text/plain) returns one container per line. If you use query parameters to page through a long list of containers, you have reached the end of the list if the number of items in the returned list is less than the request limit value. The list contains more items if the number of items in the returned list equals the limit value. When asking for a list of containers and there are none, the response behavior changes depending on whether the request format is text, JSON, or XML. For a text response, you get a 204 , because there is no content. However, for a JSON or XML response, you get a 200 with content indicating an empty array. 
Lists containers for an account. Creates, updates, shows, and deletes account metadata. For more information and concepts about accounts see Object Storage API overview.

Show account details and list containers

Shows details for an account and lists containers, sorted by name, in the account. The sort order for the name is based on a binary comparison, a single built-in collating sequence that compares string data by using the SQLite memcmp() function, regardless of text encoding. See Collating Sequences.

The response body returns a list of containers. The default response (text/plain) returns one container per line.

If you use query parameters to page through a long list of containers, you have reached the end of the list if the number of items in the returned list is less than the request limit value. The list contains more items if the number of items in the returned list equals the limit value.

When asking for a list of containers and there are none, the response behavior changes depending on whether the request format is text, JSON, or XML. For a text response, you get a 204, because there is no content. However, for a JSON or XML response, you get a 200 with content indicating an empty array.

Example requests and responses:

Show account details and list containers and ask for a JSON response:

``` curl -i $publicURL?format=json -X GET -H \"X-Auth-Token: $token\" ```

``` HTTP/1.1 200 OK Content-Length: 96 X-Account-Object-Count: 1 X-Timestamp: 1389453423.35964 X-Account-Meta-Subject: Literature X-Account-Bytes-Used: 14 X-Account-Container-Count: 2 Content-Type: application/json; charset=utf-8 Accept-Ranges: bytes X-Trans-Id: tx274a77a8975c4a66aeb24-0052d95365 X-Openstack-Request-Id: tx274a77a8975c4a66aeb24-0052d95365 Date: Fri, 17 Jan 2014 15:59:33 GMT ```

``` [ { \"count\": 0, \"bytes\": 0, \"name\": \"janeausten\", \"last_modified\": \"2013-11-19T20:08:13.283452\" }, { \"count\": 1, \"bytes\": 14, \"name\": \"marktwain\", \"last_modified\": \"2016-04-29T16:23:50.460230\" } ] ```

Show account details and list containers and ask for an XML response:

``` curl -i $publicURL?format=xml -X GET -H \"X-Auth-Token: $token\" ```

``` HTTP/1.1 200 OK Content-Length: 262 X-Account-Object-Count: 1 X-Timestamp: 1389453423.35964 X-Account-Meta-Subject: Literature X-Account-Bytes-Used: 14 X-Account-Container-Count: 2 Content-Type: application/xml; charset=utf-8 Accept-Ranges: bytes X-Trans-Id: tx69f60bc9f7634a01988e6-0052d9544b X-Openstack-Request-Id: tx69f60bc9f7634a01988e6-0052d9544b Date: Fri, 17 Jan 2014 16:03:23 GMT ```

``` <?xml version=\"1.0\" encoding=\"UTF-8\"?> <account name=\"my_account\"> <container> <name>janeausten</name> <count>0</count> <bytes>0</bytes> <lastmodified>2013-11-19T20:08:13.283452</lastmodified> </container> <container> <name>marktwain</name> <count>1</count> <bytes>14</bytes> <lastmodified>2016-04-29T16:23:50.460230</lastmodified> </container> </account> ```

If the request succeeds, the operation returns one of these status codes:

OK (200)." }, { "data": "Success. The response body lists the containers.

No Content (204). Success. The response body shows no containers. Either the account has no containers or you are paging through a long list of names by using the marker, limit, or end_marker query parameter and you have reached the end of the list.

Normal response codes: 200, 204

Request parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| limit (Optional) | query | integer | For an integer value n, limits the number of results to n. |
| marker (Optional) | query | string | For a string value, x, constrains the list to items whose names are greater than x. |
| end_marker (Optional) | query | string | For a string value, x, constrains the list to items whose names are less than x. |
| format (Optional) | query | string | The response format. Valid values are json, xml, or plain. The default is plain. If you append the format=xml or format=json query parameter to the storage account URL, the response shows extended container information serialized in that format. If you append the format=plain query parameter, the response lists the container names separated by newlines. |
| prefix (Optional) | query | string | Only objects with this prefix will be returned. When combined with a delimiter query, this enables API users to simulate and traverse the objects in a container as if they were in a directory tree. |
| delimiter (Optional) | query | string | The delimiter is a single character used to split object names to present a pseudo-directory hierarchy of objects. When combined with a prefix query, this enables API users to simulate and traverse the objects in a container as if they were in a directory tree. |
| reverse (Optional) | query | boolean | By default, listings are returned sorted by name, ascending. If you include the reverse=true query parameter, the listing will be returned sorted by name, descending. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Newest (Optional) | header | boolean | If set to true, Object Storage queries all replicas to return the most recent one. If you omit this header, Object Storage responds faster after it finds one valid replica. Because setting this header to true is more expensive for the back end, use it only when it is absolutely needed. |
| Accept (Optional) | header | string | Instead of using the format query parameter, set this header to application/json, application/xml, or text/xml. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |

Response parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| Content-Length | header | string | If the operation succeeds, the length of the response body in bytes. On error, this is the length of the error text. |
| X-Account-Meta-name (Optional) | header | string | The custom account metadata item, where name is the name of the metadata item. One X-Account-Meta-name response header appears for each metadata item (for each name). |
| X-Account-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. If not set, this header is not returned in the response. |
| X-Account-Meta-Temp-URL-Key-2 (Optional) | header | string | The second secret key value for temporary URLs. If not set, this header is not returned in the response. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Account-Bytes-Used | header | integer | The total number of bytes that are stored in Object Storage for the account. |
| X-Account-Container-Count | header | integer | The number of containers. |
| X-Account-Object-Count | header | integer | The number of objects in the account. |
| X-Account-Storage-Policy-name-Bytes-Used | header | integer | The total number of bytes that are stored in a given storage policy, where name is the name of the storage policy. |
| X-Account-Storage-Policy-name-Container-Count | header | integer | The number of containers in the account that use the given storage policy, where name is the name of the storage policy. |
| X-Account-Storage-Policy-name-Object-Count | header | integer | The number of objects in the given storage policy, where name is the name of the storage policy. |
| X-Account-Meta-Quota-Bytes (Optional) | header | string | If present, this is the limit on the total size in bytes of objects stored in the account. Typically this value is set by an administrator. |
| X-Account-Access-Control (Optional) | header | string | Note: X-Account-Access-Control is not supported by Keystone auth. The account access control list (ACL) that grants access to containers and objects in the account. If there is no ACL, this header is not returned by this operation. See Account ACLs for more information. |
| Content-Type | header | string | If the operation succeeds, this value is the MIME type of the list response. The MIME type is determined by the listing format specified by the request and will be one of text/plain, application/json, application/xml, or text/xml. If the operation fails, this value is the MIME type of the error text in the response body. |
| count | body | integer | The number of objects in the container. |
| bytes | body | integer | The total number of bytes that are stored in Object Storage for the account. |
| name | body | string | The name of the container. |
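The limit/marker paging rule described above translates directly into a loop. Here is a sketch with python-swiftclient; the connection details are placeholders as in the earlier sketch.

```
# Page through a long container listing with limit and marker.

from swiftclient.client import Connection

conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                  user='test:tester', key='testing')

marker = ''
while True:
    _headers, page = conn.get_account(marker=marker, limit=100)
    for container in page:
        print(container['name'])
    if len(page) < 100:
        break          # fewer items than the limit: end of the listing
    marker = page[-1]['name']
```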
Create, update, or delete account metadata

Creates, updates, or deletes account metadata. To create, update, or delete custom metadata, use the X-Account-Meta-{name} request header, where {name} is the name of the metadata item.

Account metadata operations work differently from object metadata operations. Depending on the contents of your POST account metadata request, the Object Storage API updates the metadata as shown in the following table:

Account metadata operations

| POST request header contains | Result |
|:--|:--|
| A metadata key without a value. The metadata key already exists for the account. | The API removes the metadata item from the account. |
| A metadata key without a value. The metadata key does not already exist for the account. | The API ignores the metadata key. |
| A metadata key value. The metadata key already exists for the account. | The API updates the metadata key value for the account. |
| A metadata key value. The metadata key does not already exist for the account. | The API adds the metadata key and value pair, or item, to the account. |
| One or more account metadata items are omitted. The metadata items already exist for the account. | The API does not change the existing metadata items. |

To delete a metadata header, send an empty value for that header, such as for the X-Account-Meta-Book header. If the tool you use to communicate with Object Storage, such as an older version of cURL, does not support empty headers, send the X-Remove-Account-Meta-{name} header with an arbitrary value. For example, X-Remove-Account-Meta-Book: x. The operation ignores the arbitrary value.

Note: Metadata keys (the name of the metadata) must be treated as case-insensitive at all times. These keys can contain ASCII 7-bit characters that are not control (0-31) characters, DEL, or a separator character, according to HTTP/1.1. The underscore character is silently converted to a hyphen.

Note: The metadata value must be UTF-8-encoded and then URL-encoded before you include it in the header. This is a direct violation of the HTTP/1.1 basic rules.

Subsequent requests for the same key and value pair overwrite the existing value. If the container already has other custom metadata items, a request to create, update, or delete metadata does not affect those items.

This operation does not accept a request body.

Example requests and responses:

Create account metadata:

``` curl -i $publicURL -X POST -H \"X-Auth-Token: $token\" -H \"X-Account-Meta-Book: MobyDick\" -H \"X-Account-Meta-Subject: Literature\" ```

``` HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx8c2dd6aee35442a4a5646-0052d954fb X-Openstack-Request-Id: tx8c2dd6aee35442a4a5646-0052d954fb Date: Fri, 17 Jan 2014 16:06:19 GMT ```

Update account metadata:

``` curl -i $publicURL -X POST -H \"X-Auth-Token: $token\" -H \"X-Account-Meta-Subject: AmericanLiterature\" ```

``` HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx1439b96137364ab581156-0052d95532 X-Openstack-Request-Id: tx1439b96137364ab581156-0052d95532 Date: Fri, 17 Jan 2014 16:07:14 GMT ```

Delete account metadata:

``` curl -i $publicURL -X POST -H \"X-Auth-Token: $token\" -H \"X-Remove-Account-Meta-Subject: x\" ```

``` HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx411cf57701424da99948a-0052d9556f X-Openstack-Request-Id: tx411cf57701424da99948a-0052d9556f Date: Fri, 17 Jan 2014 16:08:15 GMT ```

If the request succeeds, the operation returns the No Content (204) response code." }, { "data": "To confirm your changes, issue a show account metadata request.

Normal response codes: 204

Request parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Account-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. |
| X-Account-Meta-Temp-URL-Key-2 (Optional) | header | string | A second secret key value for temporary URLs. The second key enables you to rotate keys by having two active keys at the same time. |
| X-Account-Meta-name (Optional) | header | string | The account metadata. The name is the name of the metadata item that you want to add, update, or delete. To delete this item, send an empty value in this header. You must specify an X-Account-Meta-name header for each metadata item (for each name) that you want to add, update, or delete. |
| X-Remove-Account-name (Optional) | header | string | Removes the metadata item named name. For example, X-Remove-Account-Meta-Blue removes custom metadata. |
| X-Account-Access-Control (Optional) | header | string | Note: X-Account-Access-Control is not supported by Keystone auth. Sets an account access control list (ACL) that grants access to containers and objects in the account. See Account ACLs for more information. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |

Response parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
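The create/update/delete semantics in the table above can also be exercised with python-swiftclient; here is a sketch with the same placeholder connection as before. The Book metadata item mirrors the curl examples.

```
# Create, update, and delete account metadata items.

from swiftclient.client import Connection

conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                  user='test:tester', key='testing')

# A header with a value creates or updates the item.
conn.post_account(headers={'X-Account-Meta-Book': 'MobyDick'})

# An empty value deletes the item, per the table above.
conn.post_account(headers={'X-Account-Meta-Book': ''})
```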
Show account metadata

Shows metadata for an account. Metadata for the account includes:

- Number of containers
- Number of objects
- Total number of bytes that are stored in Object Storage for the account

Because the storage system can store large amounts of data, take care when you represent the total bytes response as an integer; when possible, convert it to a 64-bit unsigned integer if your platform supports that primitive type.

Do not include metadata headers in this request.

Show account metadata request:

``` curl -i $publicURL -X HEAD -H \"X-Auth-Token: $token\" ```

``` HTTP/1.1 204 No Content Content-Length: 0 X-Account-Object-Count: 1 X-Account-Meta-Book: MobyDick X-Timestamp: 1389453423.35964 X-Account-Bytes-Used: 14 X-Account-Container-Count: 2 Content-Type: text/plain; charset=utf-8 Accept-Ranges: bytes X-Trans-Id: txafb3504870144b8ca40f7-0052d955d4 X-Openstack-Request-Id: txafb3504870144b8ca40f7-0052d955d4 Date: Fri, 17 Jan 2014 16:09:56 GMT ```

If the account or authentication token is not valid, the operation returns the Unauthorized (401) response code.

Normal response codes: 204

Error response codes: 401

Request parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Newest (Optional) | header | boolean | If set to true, Object Storage queries all replicas to return the most recent one. If you omit this header, Object Storage responds faster after it finds one valid replica. Because setting this header to true is more expensive for the back end, use it only when it is absolutely needed. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |

Response parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| X-Account-Meta-name (Optional) | header | string | The custom account metadata item, where name is the name of the metadata item. One X-Account-Meta-name response header appears for each metadata item (for each name). |
| X-Account-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. If not set, this header is not returned in the response. |
| X-Account-Meta-Temp-URL-Key-2 (Optional) | header | string | The second secret key value for temporary URLs. If not set, this header is not returned in the response. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Account-Bytes-Used | header | integer | The total number of bytes that are stored in Object Storage for the account. |
| X-Account-Object-Count | header | integer | The number of objects in the account. |
| X-Account-Container-Count | header | integer | The number of containers. |
| X-Account-Storage-Policy-name-Bytes-Used | header | integer | The total number of bytes that are stored in a given storage policy, where name is the name of the storage policy. |
| X-Account-Storage-Policy-name-Container-Count | header | integer | The number of containers in the account that use the given storage policy, where name is the name of the storage policy. |
| X-Account-Storage-Policy-name-Object-Count | header | integer | The number of objects in the given storage policy, where name is the name of the storage policy. |
| X-Account-Meta-Quota-Bytes (Optional) | header | string | If present, this is the limit on the total size in bytes of objects stored in the account. Typically this value is set by an administrator. |
| X-Account-Access-Control (Optional) | header | string | Note: X-Account-Access-Control is not supported by Keystone auth. The account access control list (ACL) that grants access to containers and objects in the account. If there is no ACL, this header is not returned by this operation. See Account ACLs for more information. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |

Delete the specified account

Deletes the specified account when a reseller admin issues this request. Accounts are only deleted when (1) the request carries a reseller admin level auth token, (2) a DELETE is sent to a proxy server for the account to be deleted, and (3) that proxy server has the allow_account_management config option set to true.

Note that issuing a DELETE request simply marks the account for deletion later, as outlined in the linked documentation. Take care when performing this operation because deleting an account is a one-way operation that is not trivially recoverable. It's crucial to note that in an OpenStack context, you should delete an account after the project/tenant has been deleted from Keystone.

``` curl -i $publicURL -X DELETE -H 'X-Auth-Token: $<reseller admin token>' ```

``` HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Account-Status: Deleted X-Trans-Id: tx91ce60a640cc42eca198a-006128c180 X-Openstack-Request-Id: tx91ce60a640cc42eca198a-006128c180 Date: Fri, 27 Aug 2021 11:42:08 GMT ```

If the account or authentication token is not valid, the operation returns the Unauthorized (401). If you try to delete an account with a non-admin token, a 403 Forbidden response code is returned. If you give a non-existent account or an invalid URL, a 404 Not Found response code is returned.

Normal response codes: 204

Error response codes: 401, 403, 404

Request parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |

Response parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
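Closing out the account operations, here is a sketch of reading the statistics returned by the show account metadata request above; python-swiftclient returns response headers as a dict with lower-cased names, and the connection details are placeholders as before.

```
# Read account statistics from a HEAD account request.

from swiftclient.client import Connection

conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                  user='test:tester', key='testing')

headers = conn.head_account()
print('containers:', headers['x-account-container-count'])
print('objects:', headers['x-account-object-count'])
print('bytes used:', headers['x-account-bytes-used'])
```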
Lists objects in a container. Creates, shows details for, and deletes containers. Creates, updates, shows, and deletes container metadata. For more information and concepts about containers see Object Storage API overview.

Show container details and list objects

Shows details for a container and lists objects, sorted by name, in the container. Specify query parameters in the request to filter the list and return a subset of objects. Omit query parameters to return a list of objects that are stored in the container, up to 10,000 names. The 10,000 maximum value is configurable. To view the value for the cluster, issue a GET /info request.

Example requests and responses:

OK (200). Success. The response body lists the objects.

No Content (204). Success. The response body shows no objects. Either the container has no objects or you are paging through a long list of objects by using the marker, limit, or end_marker query parameter and you have reached the end of the list.

If the container does not exist, the call returns the Not Found (404) response code.

Normal response codes: 200, 204

Error response codes: 404

Request parameters:

| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| limit (Optional) | query | integer | For an integer value n, limits the number of results to n. |
| marker (Optional) | query | string | For a string value, x, constrains the list to items whose names are greater than x. |
| end_marker (Optional) | query | string | For a string value, x, constrains the list to items whose names are less than x. |
| prefix (Optional) | query | string | Only objects with this prefix will be returned. When combined with a delimiter query, this enables API users to simulate and traverse the objects in a container as if they were in a directory tree. |
| format (Optional) | query | string | The response format. Valid values are json, xml, or plain. The default is plain. If you append the format=xml or format=json query parameter to the storage account URL, the response shows extended container information serialized in that format. If you append the format=plain query parameter, the response lists the container names separated by newlines. |
| delimiter (Optional) | query | string | The delimiter is a single character used to split object names to present a pseudo-directory hierarchy of objects. When combined with a prefix query, this enables API users to simulate and traverse the objects in a container as if they were in a directory tree. |
| path (Optional) | query | string | For a string value, returns the object names that are nested in the pseudo path. Please use prefix/delimiter queries instead of using this path query. |
| reverse (Optional) | query | boolean | By default, listings are returned sorted by name, ascending. If you include the reverse=true query parameter, the listing will be returned sorted by name, descending. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Newest (Optional) | header | boolean | If set to true, Object Storage queries all replicas to return the most recent one. If you omit this header, Object Storage responds faster after it finds one valid replica. Because setting this header to true is more expensive for the back end, use it only when it is absolutely needed. |
| Accept (Optional) | header | string | Instead of using the format query parameter, set this header to application/json, application/xml, or text/xml. |
| X-Container-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. |
| X-Container-Meta-Temp-URL-Key-2 (Optional) | header | string | A second secret key value for temporary URLs. The second key enables you to rotate keys by having two active keys at the same time. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
| X-Storage-Policy (Optional) | header | string | In requests, specifies the name of the storage policy to use for the container. In responses, is the storage policy name. The storage policy of the container cannot be changed. |
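Before the response parameters, here is a short sketch of the prefix and delimiter behavior described in the request table above; the container name and prefix are hypothetical, and the connection details are placeholders as in the earlier sketches.

```
# List 'directories' directly under a pseudo path using prefix/delimiter.

from swiftclient.client import Connection

conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                  user='test:tester', key='testing')

_headers, listing = conn.get_container('photos', prefix='2024/', delimiter='/')
for item in listing:
    # Rolled-up prefixes come back as 'subdir' entries; real objects as 'name'.
    print(item.get('subdir') or item['name'])
```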
You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. X-Storage-Policy (Optional) header string In requests, specifies the name of the storage policy to use for the container. In responses, is the storage policy name. The storage policy of the container cannot be changed. | Name | In | Type | Description | |:-|:-|:--|:--| | X-Container-Meta-name | header | string | The custom container metadata item, where name is the name of the metadata item. One X-Container-Meta-name response header appears for each metadata item (for each" }, { "data": "| | Content-Length | header | string | If the operation succeeds, the length of the response body in bytes. On error, this is the length of the error text. | | X-Container-Object-Count | header | integer | The number of objects. | | X-Container-Bytes-Used | header | integer | The total number of bytes used. | | Accept-Ranges | header | string | The type of ranges that the object accepts. | | X-Container-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. If not set, this header is not returned in the response. | | X-Container-Meta-Temp-URL-Key-2 (Optional) | header | string | The second secret key value for temporary URLs. If not set, this header is not returned in the response. | | X-Container-Meta-Quota-Count (Optional) | header | string | The maximum object count of the container. If not set, this header is not returned by this operation. | | X-Container-Meta-Quota-Bytes (Optional) | header | string | The maximum size of the container, in bytes. If not set, this header is not returned by this operation. | | X-Storage-Policy (Optional) | header | string | In requests, specifies the name of the storage policy to use for the container. In responses, is the storage policy name. The storage policy of the container cannot be changed. | | X-Container-Read (Optional) | header | string | The ACL that grants read access. If there is no ACL, this header is not returned by this operation. See Container ACLs for more information. | | X-Container-Write (Optional) | header | string | The ACL that grants write access. If there is no ACL, this header is not returned by this operation. See Container ACLs for more information. | | X-Container-Sync-Key (Optional) | header | string | The secret key for container synchronization. If not set, this header is not returned by this operation. | | X-Container-Sync-To (Optional) | header | string | The destination for container synchronization. If not set, this header is not returned by this operation. | | X-Versions-Location (Optional) | header | string | If present, this container has versioning enabled and the value is the UTF-8 encoded name of another container. For more information about object versioning, see Object versioning. | | X-History-Location (Optional) | header | string | If present, this container has versioning enabled and the value is the UTF-8 encoded name of another container. 
For more information about object versioning, see Object versioning. | | X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Mon, Wed, 26 Aug 2015 19:57:28 GMT. | | X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. | | X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) | | Content-Type | header | string | If the operation succeeds, this value is the MIME type of the list response. The MIME type is determined by the listing format specified by the request and will be one of text/plain, application/json, application/xml, or text/xml. If the operation fails, this value is the MIME type of the error text in the response" }, { "data": "| | Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. | | hash | body | string | The MD5 checksum value of the object content. | | last_modified | body | string | The date and time when the object was last modified. The date and time stamp format is ISO 8601: CCYY-MM-DDThh:mm:sshh:mm For example, 2015-08-27T09:49:58-05:00. The hh:mm value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is -05:00. | | content_type | body | string | The content type of the object. | | bytes | body | integer | The total number of bytes that are stored in Object Storage for the container. | | name | body | string | The name of the object. | | symlink_path | body | string | This field exists only when the object is symlink. This is the target path of the symlink object. | Name In Type Description X-Container-Meta-name header string The custom container metadata item, where name is the name of the metadata item. One X-Container-Meta-name response header appears for each metadata item (for each name). Content-Length header string If the operation succeeds, the length of the response body in bytes. On error, this is the length of the error text. X-Container-Object-Count header integer The number of objects. X-Container-Bytes-Used header integer The total number of bytes used. Accept-Ranges header string The type of ranges that the object accepts. X-Container-Meta-Temp-URL-Key (Optional) header string The secret key value for temporary URLs. If not set, this header is not returned in the response. X-Container-Meta-Temp-URL-Key-2 (Optional) header string The second secret key value for temporary URLs. If not set, this header is not returned in the response. X-Container-Meta-Quota-Count (Optional) header string The maximum object count of the container. If not set, this header is not returned by this operation. X-Container-Meta-Quota-Bytes (Optional) header string The maximum size of the container, in bytes. If not set, this header is not returned by this operation. X-Storage-Policy (Optional) header string In requests, specifies the name of the storage policy to use for the container. In responses, is the storage policy name. The storage policy of the container cannot be changed. X-Container-Read (Optional) header string The ACL that grants read access. If there is no ACL, this header is not returned by this operation. 
See Container ACLs for more information. X-Container-Write (Optional) header string The ACL that grants write access. If there is no ACL, this header is not returned by this operation. See Container ACLs for more information. X-Container-Sync-Key (Optional) header string The secret key for container synchronization. If not set, this header is not returned by this operation. X-Container-Sync-To (Optional) header string The destination for container synchronization. If not set, this header is not returned by this operation. X-Versions-Location (Optional) header string If present, this container has versioning enabled and the value is the UTF-8 encoded name of another container. For more information about object versioning, see Object versioning. X-History-Location (Optional) header string If present, this container has versioning enabled and the value is the UTF-8 encoded name of another container. For more information about object versioning, see Object versioning. X-Timestamp header integer The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Mon, Wed, 26 Aug 2015 19:57:28 GMT. X-Trans-Id header string A unique transaction ID for this request. Your service provider might need this value if you report a" }, { "data": "X-Openstack-Request-Id header string A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) Content-Type header string If the operation succeeds, this value is the MIME type of the list response. The MIME type is determined by the listing format specified by the request and will be one of text/plain, application/json, application/xml, or text/xml. If the operation fails, this value is the MIME type of the error text in the response body. Date header string The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. hash body string The MD5 checksum value of the object content. last_modified body string The date and time when the object was last modified. The date and time stamp format is ISO 8601: ``` CCYY-MM-DDThh:mm:sshh:mm ``` For example, 2015-08-27T09:49:58-05:00. The hh:mm value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is -05:00. content_type body string The content type of the object. bytes body integer The total number of bytes that are stored in Object Storage for the container. name body string The name of the object. symlink_path body string This field exists only when the object is symlink. This is the target path of the symlink object. 
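As an illustration of the prefix and delimiter query parameters described above, a pseudo-directory listing request might look like the following sketch; the container name marktwain and the photos/ prefix are assumptions for illustration only:

```
curl -i "$publicURL/marktwain?prefix=photos/&delimiter=/" -X GET -H "X-Auth-Token: $token"
```

Responses to such a request take the same forms as the full-listing examples shown below.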
```
HTTP/1.1 200 OK
Content-Length: 341
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Bytes-Used: 26
Content-Type: application/json; charset=utf-8
X-Trans-Id: tx26377fe5fab74869825d1-0052d6bdff
X-Openstack-Request-Id: tx26377fe5fab74869825d1-0052d6bdff
Date: Wed, 15 Jan 2014 16:57:35 GMT
```

```
[
    {
        "hash": "451e372e48e0f6b1114fa0724aa79fa1",
        "last_modified": "2014-01-15T16:41:49.390270",
        "bytes": 14,
        "name": "goodbye",
        "content_type": "application/octet-stream"
    },
    {
        "hash": "ed076287532e86365e841e92bfc50d8c",
        "last_modified": "2014-01-15T16:37:43.427570",
        "bytes": 12,
        "name": "helloworld",
        "content_type": "application/octet-stream"
    }
]
```

```
HTTP/1.1 200 OK
Content-Length: 500
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Bytes-Used: 26
Content-Type: application/xml; charset=utf-8
X-Trans-Id: txc75ea9a6e66f47d79e0c5-0052d6be76
X-Openstack-Request-Id: txc75ea9a6e66f47d79e0c5-0052d6be76
Date: Wed, 15 Jan 2014 16:59:35 GMT
```

```
<?xml version="1.0" encoding="UTF-8"?>
<container name="marktwain">
    <object>
        <name>goodbye</name>
        <hash>451e372e48e0f6b1114fa0724aa79fa1</hash>
        <bytes>14</bytes>
        <content_type>application/octet-stream</content_type>
        <last_modified>2014-01-15T16:41:49.390270</last_modified>
    </object>
    <object>
        <name>helloworld</name>
        <hash>ed076287532e86365e841e92bfc50d8c</hash>
        <bytes>12</bytes>
        <content_type>application/octet-stream</content_type>
        <last_modified>2014-01-15T16:37:43.427570</last_modified>
    </object>
</container>
```

Create container

Creates a container.

You do not need to check whether a container already exists before issuing a PUT operation because the operation is idempotent: it creates a container or updates an existing container, as appropriate.

To create, update, or delete a custom metadata item, use the X-Container-Meta-{name} header, where {name} is the name of the metadata item.

Note

Metadata keys (the name of the metadata) must be treated as case-insensitive at all times. These keys can contain ASCII 7-bit characters that are not control (0-31) characters, DEL, or a separator character, according to HTTP/1.1. The underscore character is silently converted to a hyphen.

Note

The metadata value must be UTF-8-encoded and then URL-encoded before you include it in the header. This is a direct violation of the HTTP/1.1 basic rules.
Example requests and responses:

Create a container with no metadata:

```
curl -i $publicURL/steven -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"
```

```
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7f6b7fa09bc2443a94df0-0052d58b56
X-Openstack-Request-Id: tx7f6b7fa09bc2443a94df0-0052d58b56
Date: Tue, 14 Jan 2014 19:09:10 GMT
```

Create a container with metadata:

```
curl -i $publicURL/marktwain -X PUT -H "X-Auth-Token: $token" -H "X-Container-Meta-Book: TomSawyer"
```

```
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx06021f10fc8642b2901e7-0052d58f37
X-Openstack-Request-Id: tx06021f10fc8642b2901e7-0052d58f37
Date: Tue, 14 Jan 2014 19:25:43 GMT
```

Create a container with an ACL to allow anybody to get an object in the marktwain container:

```
curl -i $publicURL/marktwain -X PUT -H "X-Auth-Token: $token" -H "X-Container-Read: .r:*"
```

```
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx06021f10fc8642b2901e7-0052d58f37
X-Openstack-Request-Id: tx06021f10fc8642b2901e7-0052d58f37
Date: Tue, 14 Jan 2014 19:25:43 GMT
```

Normal response codes: 201, 202

Error response codes: 400, 404, 507

| Name | In | Type | Description |
|:-|:-|:-|:-|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Container-Read (Optional) | header | string | Sets a container access control list (ACL) that grants read access. The scope of the access is specific to the container. The ACL grants the ability to perform GET or HEAD operations on objects in the container or to perform a GET or HEAD operation on the container itself. The format and scope of the ACL is dependent on the authorization system used by the Object Storage service. See Container ACLs for more information. |
| X-Container-Write (Optional) | header | string | Sets a container access control list (ACL) that grants write access. The scope of the access is specific to the container. The ACL grants the ability to perform PUT, POST and DELETE operations on objects in the container. It does not grant write access to the container metadata. The format of the ACL is dependent on the authorization system used by the Object Storage service. See Container ACLs for more information. |
| X-Container-Sync-To (Optional) | header | string | Sets the destination for container synchronization. Used with the secret key indicated in the X-Container-Sync-Key header. If you want to stop a container from synchronizing, send a blank value for the X-Container-Sync-Key header. |
| X-Container-Sync-Key (Optional) | header | string | Sets the secret key for container synchronization. If you remove the secret key, synchronization is halted. For more information, see Container to Container Synchronization. |
| X-Versions-Location (Optional) | header | string | The URL-encoded UTF-8 representation of the container that stores previous versions of objects. If neither this nor X-History-Location is set, versioning is disabled for this container. X-Versions-Location and X-History-Location cannot both be set at the same time. For more information about object versioning, see Object versioning. |
| X-History-Location (Optional) | header | string | The URL-encoded UTF-8 representation of the container that stores previous versions of objects. If neither this nor X-Versions-Location is set, versioning is disabled for this container. X-History-Location and X-Versions-Location cannot both be set at the same time. For more information about object versioning, see Object versioning. |
| X-Container-Meta-name (Optional) | header | string | The container metadata, where name is the name of the metadata item. You must specify an X-Container-Meta-name header for each metadata item (for each name) that you want to add or update. |
| X-Container-Meta-Access-Control-Allow-Origin (Optional) | header | string | Originating URLs allowed to make cross-origin requests (CORS), separated by spaces. This header applies to the container only, and all objects within the container with this header applied are CORS-enabled for the allowed origin URLs. A browser (user-agent) typically issues a preflighted request, which is an OPTIONS call that verifies the origin is allowed to make the request. The Object Storage service returns 200 if the originating URL is listed in this header parameter, and issues a 401 if the originating URL is not allowed to make a cross-origin request. Once a 200 is returned, the browser makes a second request to the Object Storage service to retrieve the CORS-enabled object. |
| X-Container-Meta-Access-Control-Max-Age (Optional) | header | string | Maximum time for the origin to hold the preflight results. A browser may make an OPTIONS call to verify the origin is allowed to make the request. Set the value to an integer number of seconds after the time that the request was received. |
| X-Container-Meta-Access-Control-Expose-Headers (Optional) | header | string | Headers the Object Storage service exposes to the browser (technically, through the user-agent setting), in the request response, separated by spaces. By default the Object Storage service returns the following headers: all simple response headers as listed on http://www.w3.org/TR/cors/#simple-response-header; the headers etag, x-timestamp, x-trans-id, x-openstack-request-id; all metadata headers (X-Container-Meta-* for containers and X-Object-Meta-* for objects); and headers listed in X-Container-Meta-Access-Control-Expose-Headers. |
| X-Container-Meta-Quota-Bytes (Optional) | header | string | Sets the maximum size of the container, in bytes. Typically these values are set by an administrator. Returns a 413 response (request entity too large) when an object PUT operation exceeds this quota value. This value does not take effect immediately. See Container Quotas for more information. |
| X-Container-Meta-Quota-Count (Optional) | header | string | Sets the maximum object count of the container. Typically these values are set by an administrator. Returns a 413 response (request entity too large) when an object PUT operation exceeds this quota value. This value does not take effect immediately. See Container Quotas for more information. |
| X-Container-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. |
| X-Container-Meta-Temp-URL-Key-2 (Optional) | header | string | A second secret key value for temporary URLs. The second key enables you to rotate keys by having two active keys at the same time. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
| X-Storage-Policy (Optional) | header | string | In requests, specifies the name of the storage policy to use for the container. In responses, is the storage policy name. The storage policy of the container cannot be changed. |

| Name | In | Type | Description |
|:-|:-|:-|:-|
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
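As a sketch of the X-Storage-Policy request header documented above, the following creates a container on a specific storage policy; the policy name gold is an assumption and must match a policy actually defined in your cluster:

```
curl -i $publicURL/marktwain -X PUT -H "X-Auth-Token: $token" -H "X-Storage-Policy: gold"
```

Because the storage policy of a container cannot be changed, choose the policy before writing any objects.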
Create, update, or delete container metadata

Creates, updates, or deletes custom metadata for a container.

To create, update, or delete a custom metadata item, use the X-Container-Meta-{name} header, where {name} is the name of the metadata item.

Note

Metadata keys (the name of the metadata) must be treated as case-insensitive at all times. These keys can contain ASCII 7-bit characters that are not control (0-31) characters, DEL, or a separator character, according to HTTP/1.1. The underscore character is silently converted to a hyphen.

Note

The metadata value must be UTF-8-encoded and then URL-encoded before you include it in the header. This is a direct violation of the HTTP/1.1 basic rules.

Subsequent requests for the same key and value pair overwrite the previous value.

To delete container metadata, send an empty value for that header, such as for the X-Container-Meta-Book header. If the tool you use to communicate with Object Storage, such as an older version of cURL, does not support empty headers, send the X-Remove-Container-Meta-{name} header with an arbitrary value. For example, X-Remove-Container-Meta-Book: x. The operation ignores the arbitrary value.

If the container already has other custom metadata items, a request to create, update, or delete metadata does not affect those items.
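A minimal sketch of the empty-value deletion style mentioned above; note that recent versions of cURL drop a value-less custom header unless the header name is terminated with a semicolon, so the exact flag syntax may vary with your client:

```
curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Container-Meta-Book;"
```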
Example requests and responses:

Create container metadata:

```
curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Container-Meta-Author: MarkTwain" -H "X-Container-Meta-Web-Directory-Type: text/directory" -H "X-Container-Meta-Century: Nineteenth"
```

```
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx05dbd434c651429193139-0052d82635
X-Openstack-Request-Id: tx05dbd434c651429193139-0052d82635
Date: Thu, 16 Jan 2014 18:34:29 GMT
```

Update container metadata:

```
curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Container-Meta-Author: SamuelClemens"
```

```
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txe60c7314bf614bb39dfe4-0052d82653
X-Openstack-Request-Id: txe60c7314bf614bb39dfe4-0052d82653
Date: Thu, 16 Jan 2014 18:34:59 GMT
```

Delete container metadata:

```
curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Remove-Container-Meta-Century: x"
```

```
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7997e18da2a34a9e84ceb-0052d826d0
X-Openstack-Request-Id: tx7997e18da2a34a9e84ceb-0052d826d0
Date: Thu, 16 Jan 2014 18:37:04 GMT
```

If the request succeeds, the operation returns the No Content (204) response code. To confirm your changes, issue a show container metadata request.

Normal response codes: 204

Error response codes: 404

| Name | In | Type | Description |
|:-|:-|:-|:-|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Container-Read (Optional) | header | string | Sets a container access control list (ACL) that grants read access. The scope of the access is specific to the container. The ACL grants the ability to perform GET or HEAD operations on objects in the container or to perform a GET or HEAD operation on the container itself. The format and scope of the ACL is dependent on the authorization system used by the Object Storage service. See Container ACLs for more information. |
| X-Remove-Container-name (Optional) | header | string | Removes the metadata item named name. For example, X-Remove-Container-Read removes the X-Container-Read metadata item and X-Remove-Container-Meta-Blue removes custom metadata. |
| X-Container-Write (Optional) | header | string | Sets a container access control list (ACL) that grants write access. The scope of the access is specific to the container. The ACL grants the ability to perform PUT, POST and DELETE operations on objects in the container. It does not grant write access to the container metadata. The format of the ACL is dependent on the authorization system used by the Object Storage service. See Container ACLs for more information. |
| X-Container-Sync-To (Optional) | header | string | Sets the destination for container synchronization. Used with the secret key indicated in the X-Container-Sync-Key header. If you want to stop a container from synchronizing, send a blank value for the X-Container-Sync-Key header. |
| X-Container-Sync-Key (Optional) | header | string | Sets the secret key for container synchronization. If you remove the secret key, synchronization is halted. For more information, see Container to Container Synchronization. |
| X-Versions-Location (Optional) | header | string | The URL-encoded UTF-8 representation of the container that stores previous versions of objects. If neither this nor X-History-Location is set, versioning is disabled for this container. X-Versions-Location and X-History-Location cannot both be set at the same time. For more information about object versioning, see Object versioning. |
| X-History-Location (Optional) | header | string | The URL-encoded UTF-8 representation of the container that stores previous versions of objects. If neither this nor X-Versions-Location is set, versioning is disabled for this container. X-History-Location and X-Versions-Location cannot both be set at the same time. For more information about object versioning, see Object versioning. |
| X-Remove-Versions-Location (Optional) | header | string | Set to any value to disable versioning. Note that this also disables versioning that was set via X-History-Location. |
| X-Remove-History-Location (Optional) | header | string | Set to any value to disable versioning. Note that this also disables versioning that was set via X-Versions-Location. |
| X-Container-Meta-name (Optional) | header | string | The container metadata, where name is the name of the metadata item. You must specify an X-Container-Meta-name header for each metadata item (for each name) that you want to add or update. |
| X-Container-Meta-Access-Control-Allow-Origin (Optional) | header | string | Originating URLs allowed to make cross-origin requests (CORS), separated by spaces. This header applies to the container only, and all objects within the container with this header applied are CORS-enabled for the allowed origin URLs. A browser (user-agent) typically issues a preflighted request, which is an OPTIONS call that verifies the origin is allowed to make the request. The Object Storage service returns 200 if the originating URL is listed in this header parameter, and issues a 401 if the originating URL is not allowed to make a cross-origin request. Once a 200 is returned, the browser makes a second request to the Object Storage service to retrieve the CORS-enabled object. |
| X-Container-Meta-Access-Control-Max-Age (Optional) | header | string | Maximum time for the origin to hold the preflight results. A browser may make an OPTIONS call to verify the origin is allowed to make the request. Set the value to an integer number of seconds after the time that the request was received. |
| X-Container-Meta-Access-Control-Expose-Headers (Optional) | header | string | Headers the Object Storage service exposes to the browser (technically, through the user-agent setting), in the request response, separated by spaces. By default the Object Storage service returns the following headers: all simple response headers as listed on http://www.w3.org/TR/cors/#simple-response-header; the headers etag, x-timestamp, x-trans-id, x-openstack-request-id; all metadata headers (X-Container-Meta-* for containers and X-Object-Meta-* for objects); and headers listed in X-Container-Meta-Access-Control-Expose-Headers. |
| X-Container-Meta-Quota-Bytes (Optional) | header | string | Sets the maximum size of the container, in bytes. Typically these values are set by an administrator. Returns a 413 response (request entity too large) when an object PUT operation exceeds this quota value. This value does not take effect immediately. See Container Quotas for more information. |
| X-Container-Meta-Quota-Count (Optional) | header | string | Sets the maximum object count of the container. Typically these values are set by an administrator. Returns a 413 response (request entity too large) when an object PUT operation exceeds this quota value. This value does not take effect immediately. See Container Quotas for more information. |
| X-Container-Meta-Web-Directory-Type (Optional) | header | string | Sets the content-type of directory marker objects. If the header is not set, the default is application/directory. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. For example, if you set "X-Container-Meta-Web-Directory-Type: text/directory", Object Storage treats 0-byte objects with a content-type of text/directory as directories rather than objects. |
| X-Container-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. |
| X-Container-Meta-Temp-URL-Key-2 (Optional) | header | string | A second secret key value for temporary URLs. The second key enables you to rotate keys by having two active keys at the same time. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
| Name | In | Type | Description |
|:-|:-|:-|:-|
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |

Show container metadata

Shows container metadata, including the number of objects and the total bytes of all objects stored in the container.

Show container metadata request:

```
curl -i $publicURL/marktwain -X HEAD -H "X-Auth-Token: $token"
```

```
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 1
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Meta-Author: SamuelClemens
X-Container-Bytes-Used: 14
Content-Type: text/plain; charset=utf-8
X-Trans-Id: tx0287b982a268461b9ec14-0052d826e2
X-Openstack-Request-Id: tx0287b982a268461b9ec14-0052d826e2
Date: Thu, 16 Jan 2014 18:37:22 GMT
```

If the request succeeds, the operation returns the No Content (204) response code.

Normal response codes: 204
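If you need the most recent replica rather than the fastest response, the X-Newest header described in the request table below can be added to the same request; a sketch against the same example container:

```
curl -i $publicURL/marktwain -X HEAD -H "X-Auth-Token: $token" -H "X-Newest: true"
```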
| Name | In | Type | Description |
|:-|:-|:-|:-|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Newest (Optional) | header | boolean | If set to true, Object Storage queries all replicas to return the most recent one. If you omit this header, Object Storage responds faster after it finds one valid replica. Because setting this header to true is more expensive for the back end, use it only when it is absolutely needed. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |

| Name | In | Type | Description |
|:-|:-|:-|:-|
| X-Container-Meta-name | header | string | The custom container metadata item, where name is the name of the metadata item. One X-Container-Meta-name response header appears for each metadata item (for each name). |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| X-Container-Object-Count | header | integer | The number of objects. |
| X-Container-Bytes-Used | header | integer | The total number of bytes used. |
| X-Container-Write (Optional) | header | string | The ACL that grants write access. If there is no ACL, this header is not returned by this operation. See Container ACLs for more information. |
| X-Container-Meta-Quota-Bytes (Optional) | header | string | The maximum size of the container, in bytes. If not set, this header is not returned by this operation. |
| X-Container-Meta-Quota-Count (Optional) | header | string | The maximum object count of the container. If not set, this header is not returned by this operation. |
| Accept-Ranges | header | string | The type of ranges that the object accepts. |
| X-Container-Read (Optional) | header | string | The ACL that grants read access. If there is no ACL, this header is not returned by this operation. See Container ACLs for more information. |
| X-Container-Meta-Access-Control-Expose-Headers (Optional) | header | string | Headers the Object Storage service exposes to the browser (technically, through the user-agent setting), in the request response, separated by spaces. By default the Object Storage service returns the following headers: all simple response headers as listed on http://www.w3.org/TR/cors/#simple-response-header; the headers etag, x-timestamp, x-trans-id, x-openstack-request-id; all metadata headers (X-Container-Meta-* for containers and X-Object-Meta-* for objects); and headers listed in X-Container-Meta-Access-Control-Expose-Headers. |
| X-Container-Meta-Temp-URL-Key (Optional) | header | string | The secret key value for temporary URLs. If not set, this header is not returned in the response. |
| X-Container-Meta-Temp-URL-Key-2 (Optional) | header | string | The second secret key value for temporary URLs. If not set, this header is not returned in the response. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Container-Meta-Access-Control-Allow-Origin (Optional) | header | string | Originating URLs allowed to make cross-origin requests (CORS), separated by spaces. This header applies to the container only, and all objects within the container with this header applied are CORS-enabled for the allowed origin URLs. A browser (user-agent) typically issues a preflighted request, which is an OPTIONS call that verifies the origin is allowed to make the request. The Object Storage service returns 200 if the originating URL is listed in this header parameter, and issues a 401 if the originating URL is not allowed to make a cross-origin request. Once a 200 is returned, the browser makes a second request to the Object Storage service to retrieve the CORS-enabled object. |
| X-Container-Meta-Access-Control-Max-Age (Optional) | header | string | Maximum time for the origin to hold the preflight results. A browser may make an OPTIONS call to verify the origin is allowed to make the request. Set the value to an integer number of seconds after the time that the request was received. |
| X-Container-Sync-Key (Optional) | header | string | The secret key for container synchronization. If not set, this header is not returned by this operation. |
| X-Container-Sync-To (Optional) | header | string | The destination for container synchronization. If not set, this header is not returned by this operation. |
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Versions-Location (Optional) | header | string | If present, this container has versioning enabled and the value is the UTF-8 encoded name of another container. For more information about object versioning, see Object versioning. |
| X-History-Location (Optional) | header | string | If present, this container has versioning enabled and the value is the UTF-8 encoded name of another container. For more information about object versioning, see Object versioning. |
| X-Storage-Policy (Optional) | header | string | In requests, specifies the name of the storage policy to use for the container. In responses, is the storage policy name. The storage policy of the container cannot be changed. |

Name In Type Description X-Container-Meta-name header string The custom container metadata item, where name is the name of the metadata item. One X-Container-Meta-name response header appears for each metadata item (for each name). Content-Length header string If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. X-Container-Object-Count header integer The number of objects. X-Container-Bytes-Used header integer The total number of bytes used. X-Container-Write (Optional) header string The ACL that grants write access. If there is no ACL, this header is not returned by this operation.
See Container ACLs for more information. X-Container-Meta-Quota-Bytes (Optional) header string The maximum size of the container, in bytes. If not set, this header is not returned by this operation. X-Container-Meta-Quota-Count (Optional) header string The maximum object count of the container. If not set, this header is not returned by this operation. Accept-Ranges header string The type of ranges that the object accepts. X-Container-Read (Optional) header string The ACL that grants read access. If there is no ACL, this header is not returned by this operation. See Container ACLs for more information. X-Container-Meta-Access-Control-Expose-Headers (Optional) header string Headers the Object Storage service exposes to the browser (technically, through the user-agent setting), in the request response, separated by spaces. By default the Object Storage service returns the following headers: All simple response headers as listed on http://www.w3.org/TR/cors/#simple-response-header. The headers etag, x-timestamp, x-trans-id, x-openstack-request-id. All metadata headers (X-Container-Meta-* for containers and X-Object-Meta-* for objects). headers listed in" }, { "data": "X-Container-Meta-Temp-URL-Key (Optional) header string The secret key value for temporary URLs. If not set, this header is not returned in the response. X-Container-Meta-Temp-URL-Key-2 (Optional) header string The second secret key value for temporary URLs. If not set, this header is not returned in the response. X-Timestamp header integer The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Mon, Wed, 26 Aug 2015 19:57:28 GMT. X-Container-Meta-Access-Control-Allow-Origin (Optional) header string Originating URLs allowed to make cross-origin requests (CORS), separated by spaces. This heading applies to the container only, and all objects within the container with this header applied are CORS-enabled for the allowed origin URLs. A browser (user-agent) typically issues a preflighted request , which is an OPTIONS call that verifies the origin is allowed to make the request. The Object Storage service returns 200 if the originating URL is listed in this header parameter, and issues a 401 if the originating URL is not allowed to make a cross-origin request. Once a 200 is returned, the browser makes a second request to the Object Storage service to retrieve the CORS-enabled object. X-Container-Meta-Access-Control-Max-Age (Optional) header string Maximum time for the origin to hold the preflight results. A browser may make an OPTIONS call to verify the origin is allowed to make the request. Set the value to an integer number of seconds after the time that the request was received. X-Container-Sync-Key (Optional) header string The secret key for container synchronization. If not set, this header is not returned by this operation. X-Container-Sync-To (Optional) header string The destination for container synchronization. If not set, this header is not returned by this operation. Date header string The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. X-Trans-Id header string A unique transaction ID for this request. Your service provider might need this value if you report a problem. X-Openstack-Request-Id header string A unique transaction ID for this request. 
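As an illustration, the metadata headers above can be inspected with a request against the container in the same style as the other examples in this document. The marktwain container name is an assumption:

```
curl -i $publicURL/marktwain -X HEAD -H "X-Auth-Token: $token"
```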
Delete container

Deletes an empty container. This operation fails unless the container is empty. An empty container has no objects.

Delete the steven container:

```
curl -i $publicURL/steven -X DELETE -H "X-Auth-Token: $token"
```

If the container does not exist, the response is:

```
HTTP/1.1 404 Not Found
Content-Length: 70
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4d728126b17b43b598bf7-0052d81e34
X-Openstack-Request-Id: tx4d728126b17b43b598bf7-0052d81e34
Date: Thu, 16 Jan 2014 18:00:20 GMT
```

If the container exists and the deletion succeeds, the response is:

```
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txf76c375ebece4df19c84c-0052d81f14
X-Openstack-Request-Id: txf76c375ebece4df19c84c-0052d81f14
Date: Thu, 16 Jan 2014 18:04:04 GMT
```

If the container exists but is not empty, the response is:

```
HTTP/1.1 409 Conflict
Content-Length: 95
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7782dc6a97b94a46956b5-0052d81f6b
X-Openstack-Request-Id: tx7782dc6a97b94a46956b5-0052d81f6b
Date: Thu, 16 Jan 2014 18:05:31 GMT

<html>
<h1>Conflict</h1>
<p>There was a conflict when trying to complete your request.</p>
</html>
```

Normal response codes: 204

Error response codes: 404, 409
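Because the container must be empty, a caller typically deletes its objects first. A minimal sketch, assuming the container listing fits in one plain-text response page and the object names need no URL encoding:

```
# delete every object in the container, then the container itself
for obj in $(curl -s $publicURL/steven -H "X-Auth-Token: $token"); do
  curl -s "$publicURL/steven/$obj" -X DELETE -H "X-Auth-Token: $token"
done
curl -i $publicURL/steven -X DELETE -H "X-Auth-Token: $token"
```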
| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |

| Name | In | Type | Description |
|:--|:--|:--|:--|
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example: Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
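As a concrete illustration of the X-Trans-Id-Extra request header described above, the tag value here is an arbitrary, URL-safe example:

```
curl -i $publicURL/steven -X DELETE -H "X-Auth-Token: $token" -H "X-Trans-Id-Extra: cleanup-job-42"
```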
Creates, replaces, shows details for, and deletes objects. Copies objects from another object with a new or different name. Updates object metadata.

For more information and concepts about objects see Object Storage API overview and Large Objects.

Get object content and metadata

Downloads the object content and gets the object metadata.

This operation returns the object metadata in the response headers and the object content in the response body.

If this is a large object, the response body contains the concatenated content of the segment objects. To get the manifest instead of concatenated segment objects for a static large object, use the multipart-manifest query parameter.

Example requests and responses:

Show object details for the goodbye object in the marktwain container:

```
curl -i $publicURL/marktwain/goodbye -X GET -H "X-Auth-Token: $token"
```

```
HTTP/1.1 200 OK
Content-Length: 14
Accept-Ranges: bytes
Last-Modified: Wed, 15 Jan 2014 16:41:49 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
X-Timestamp: 1389804109.39027
X-Object-Meta-Orig-Filename: goodbyeworld.txt
Content-Type: application/octet-stream
X-Trans-Id: tx8145a190241f4cf6b05f5-0052d82a34
X-Openstack-Request-Id: tx8145a190241f4cf6b05f5-0052d82a34
Date: Thu, 16 Jan 2014 18:51:32 GMT

Goodbye World!
```

Show object details for the goodbye object, which does not exist, in the janeausten container:

```
curl -i $publicURL/janeausten/goodbye -X GET -H "X-Auth-Token: $token"
```

```
HTTP/1.1 404 Not Found
Content-Length: 70
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx073f7cbb850c4c99934b9-0052d82b04
X-Openstack-Request-Id: tx073f7cbb850c4c99934b9-0052d82b04
Date: Thu, 16 Jan 2014 18:55:00 GMT

<html>
<h1>Not Found</h1>
<p>The resource could not be found.</p>
</html>
```

The operation returns the Range Not Satisfiable (416) response code for any ranged GET requests that specify more than fifty ranges, three overlapping ranges, or eight non-increasing ranges.

Normal response codes: 200

Error response codes: 416, 404
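For instance, a ranged GET for the first five bytes of the goodbye object might look like the following sketch; on success the server is expected to reply with 206 Partial Content and a Content-Range header:

```
curl -i $publicURL/marktwain/goodbye -X GET -H "X-Auth-Token: $token" -H "Range: bytes=0-4"
```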
| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| object (Optional) | path | string | The unique name for the object. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Newest (Optional) | header | boolean | If set to true, Object Storage queries all replicas to return the most recent one. If you omit this header, Object Storage responds faster after it finds one valid replica. Because setting this header to true is more expensive for the back end, use it only when it is absolutely needed. |
| temp_url_sig | query | string | Used with temporary URLs to sign the request with an HMAC-SHA1 cryptographic signature that defines the allowed HTTP method, expiration date, full path to the object, and the secret key for the temporary URL. For more information about temporary URLs, see Temporary URL middleware. |
| temp_url_expires | query | integer | The date and time in UNIX Epoch time stamp format or ISO 8601 UTC timestamp when the signature for temporary URLs expires. For example, 1440619048 or 2015-08-26T19:57:28Z is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. For more information about temporary URLs, see Temporary URL middleware. |
| filename (Optional) | query | string | Overrides the default file name. Object Storage generates a default file name for GET temporary URLs that is based on the object name. Object Storage returns this value in the Content-Disposition response header. Browsers can interpret this file name value as a file attachment to save. For more information about temporary URLs, see Temporary URL middleware. |
| multipart-manifest (Optional) | query | string | If you include the multipart-manifest=get query parameter and the object is a large object, the object contents are not returned. Instead, the manifest is returned in the X-Object-Manifest response header for dynamic large objects or in the response body for static large objects. |
| symlink (Optional) | query | string | If you include the symlink=get query parameter and the object is a symlink, then the response will include data and metadata from the symlink itself rather than from the target. |
| Range (Optional) | header | string | The ranges of content to get. You can use the Range header to get portions of data by using one or more range specifications. To specify many ranges, separate the range specifications with a comma. The types of range specifications are: a byte range specification, which uses FIRST_BYTE_OFFSET to specify the start of the data range and LAST_BYTE_OFFSET to specify the end (you can omit the LAST_BYTE_OFFSET, in which case the value defaults to the offset of the last byte of data); and a suffix byte range specification, which uses -LENGTH to specify the last LENGTH bytes of data. The following forms of the header specify the following ranges of data: Range: bytes=-5 (the last five bytes); Range: bytes=10-15 (the six bytes of data after a 10-byte offset); Range: bytes=10-15,-5 (a multi-part response that contains the last five bytes and the six bytes of data after a 10-byte offset; the Content-Type response header contains multipart/byteranges); Range: bytes=4-6 (bytes 4 to 6 inclusive); Range: bytes=2-2 (byte 2, the third byte of the data); Range: bytes=6- (byte 6 and after); Range: bytes=1-3,2-5 (a multi-part response that contains bytes 1 to 3 inclusive, and bytes 2 to 5 inclusive; the Content-Type response header contains multipart/byteranges). |
| If-Match (Optional) | header | string | See Request for Comments: 2616. |
| If-None-Match (Optional) | header | string | A client that has one or more entities previously obtained from the resource can verify that none of those entities is current by including a list of their associated entity tags in the If-None-Match header field. See Request for Comments: 2616 for details. |
| If-Modified-Since (Optional) | header | string | See Request for Comments: 2616. |
| If-Unmodified-Since (Optional) | header | string | See Request for Comments: 2616. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
| Name | In | Type | Description |
|:--|:--|:--|:--|
| Content-Length | header | string | The length of the object content in the response body, in bytes. |
| Content-Type | header | string | If the operation succeeds, this value is the MIME type of the object. If the operation fails, this value is the MIME type of the error text in the response body. |
| X-Object-Meta-name (Optional) | header | string | If present, the custom object metadata item, where name is the name of the metadata item. One X-Object-Meta-name response header appears for each metadata name item. |
| Content-Disposition (Optional) | header | string | If present, specifies the override behavior for the browser. For example, this header might specify that the browser use a download program to save this file rather than show the file, which is the default. If not set, this header is not returned by this operation. |
| Content-Encoding (Optional) | header | string | If present, the value of the Content-Encoding metadata. If not set, the operation does not return this header. |
| X-Delete-At (Optional) | header | integer | If present, specifies date and time in UNIX Epoch time stamp format when the system removes the object. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Accept-Ranges | header | string | The type of ranges that the object accepts. |
| X-Object-Manifest (Optional) | header | string | If present, this is a dynamic large object manifest object. The value is the container and object name prefix of the segment objects in the form container/prefix. |
| Last-Modified | header | string | The date and time when the object was created or its metadata was changed. The date and time is formatted as shown in this example: Fri, 12 Aug 2016 14:24:16 GMT. The time is always in UTC. |
| ETag | header | string | For objects smaller than 5 GB, this value is the MD5 checksum of the object content, and the value is not quoted. For manifest objects, this value is the MD5 checksum of the concatenated string of ETag values for each of the segments in the manifest, and not the MD5 checksum of the content that was downloaded; also, in that case, the value is enclosed in double-quote characters. You are strongly recommended to compute the MD5 checksum of the response body as it is received and compare this value with the one in the ETag header. If they differ, the content was corrupted, so retry the operation. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example: Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Static-Large-Object | header | boolean | Set to true if this object is a static large object manifest object. |
| X-Symlink-Target (Optional) | header | string | If present, this is a symlink object. The value is the relative path of the target object in the format <container>/<object>. |
| X-Symlink-Target-Account (Optional) | header | string | If present, and X-Symlink-Target is present, then this is a cross-account symlink to an object in the account specified in the value. |

See examples above.
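As the ETag description recommends, the downloaded body can be checked against the returned checksum. A minimal sketch for a non-manifest object, assuming a POSIX shell with md5sum available:

```
# save headers and body separately, then compare checksums
curl -s -D headers.txt -o goodbye.dat $publicURL/marktwain/goodbye -H "X-Auth-Token: $token"
grep -i '^Etag:' headers.txt
md5sum goodbye.dat   # should match the Etag value above
```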
Create or replace object

Creates an object with data content and metadata, or replaces an existing object with data content and metadata.

The PUT operation always creates an object. If you use this operation on an existing object, you replace the existing object and metadata rather than modifying the object. Consequently, this operation returns the Created (201) response code.

If you use this operation to copy a manifest object, the new object is a normal object and not a copy of the manifest. Instead it is a concatenation of all the segment objects. This means that you cannot copy objects larger than 5 GB.

Note that the provider may have limited the characters which are allowed in an object name. Any name limits are exposed under the name_check key in the /info discoverability response. Regardless of name_check limitations, names must be URL quoted UTF-8.

To create custom metadata, use the X-Object-Meta-name header, where name is the name of the metadata item.

Note

Metadata keys (the name of the metadata) must be treated as case-insensitive at all times. These keys can contain ASCII 7-bit characters that are not control (0-31) characters, DEL, or a separator character, according to HTTP/1.1. The underscore character is silently converted to a hyphen.

Example requests and responses:

Create object:

```
curl -i $publicURL/janeausten/helloworld.txt -X PUT -d "Hello" -H "Content-Type: text/html; charset=UTF-8" -H "X-Auth-Token: $token"
```

```
HTTP/1.1 201 Created
Last-Modified: Fri, 17 Jan 2014 17:28:35 GMT
Content-Length: 0
Etag: 8b1a9953c4611296a827abf8c47804d7
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4d5e4f06d357462bb732f-0052d96843
X-Openstack-Request-Id: tx4d5e4f06d357462bb732f-0052d96843
Date: Fri, 17 Jan 2014 17:28:35 GMT
```

Replace object:

```
curl -i $publicURL/janeausten/helloworld.txt -X PUT -d "Hola" -H "X-Auth-Token: $token"
```

```
HTTP/1.1 201 Created
Last-Modified: Fri, 17 Jan 2014 17:28:35 GMT
Content-Length: 0
Etag: f688ae26e9cfa3ba6235477831d5122e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4d5e4f06d357462bb732f-0052d96843
X-Openstack-Request-Id: tx4d5e4f06d357462bb732f-0052d96843
Date: Fri, 17 Jan 2014 17:28:35 GMT
```

The Created (201) response code indicates a successful write.
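As one more illustration, not taken from the examples above, an object can be scheduled for automatic deletion at upload time with the X-Delete-After header described in the table below; the 3600-second value here is arbitrary:

```
curl -i $publicURL/janeausten/tempnote.txt -X PUT -d "expires soon" -H "X-Delete-After: 3600" -H "X-Auth-Token: $token"
```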
If the container for the object does not already exist, the operation returns the 404 Not Found response code.

If the request times out, the operation returns the Request Timeout (408) response code.

The Length Required (411) response code indicates a missing Transfer-Encoding or Content-Length request header.

If the MD5 checksum of the data that is written to the object store does not match the optional ETag value, the operation returns the Unprocessable Entity (422) response code.

Normal response codes: 201

Error response codes: 404, 408, 411, 422

| Name | In | Type | Description |
|:--|:--|:--|:--|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| object (Optional) | path | string | The unique name for the object. |
| multipart-manifest (Optional) | query | string | If you include the multipart-manifest=put query parameter, the object is a static large object manifest and the body contains the manifest. See Static large objects for more information. |
| temp_url_sig | query | string | Used with temporary URLs to sign the request with an HMAC-SHA1 cryptographic signature that defines the allowed HTTP method, expiration date, full path to the object, and the secret key for the temporary URL. For more information about temporary URLs, see Temporary URL middleware. |
| temp_url_expires | query | integer | The date and time in UNIX Epoch time stamp format or ISO 8601 UTC timestamp when the signature for temporary URLs expires. For example, 1440619048 or 2015-08-26T19:57:28Z is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. For more information about temporary URLs, see Temporary URL middleware. |
| X-Object-Manifest (Optional) | header | string | Set to specify that this is a dynamic large object manifest object. The value is the container and object name prefix of the segment objects in the form container/prefix. You must UTF-8-encode and then URL-encode the names of the container and prefix before you include them in this header. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| Content-Length (Optional) | header | integer | Set to the length of the object content (i.e. the length in bytes of the request body). Do not set if chunked transfer encoding is being used. |
| Transfer-Encoding (Optional) | header | string | Set to chunked to enable chunked transfer encoding. If used, do not set the Content-Length header to a non-zero value. |
| Content-Type (Optional) | header | string | Sets the MIME type for the object. |
| X-Detect-Content-Type (Optional) | header | boolean | If set to true, Object Storage guesses the content type based on the file extension and ignores the value sent in the Content-Type header, if present. |
| X-Copy-From (Optional) | header | string | If set, this is the name of an object used to create the new object by copying the X-Copy-From object. The value is in form {container}/{object}. You must UTF-8-encode and then URL-encode the names of the container and object before you include them in the header. Using PUT with X-Copy-From has the same effect as using the COPY operation to copy an object. Using Range header with X-Copy-From will create a new partial copied object with bytes set by Range. |
| X-Copy-From-Account (Optional) | header | string | Specifies the account name where the object is copied from. If not specified, the object is copied from the account which owns the new object (i.e., the account in the path). |
| ETag (Optional) | header | string | The MD5 checksum value of the request body. For example, the MD5 checksum value of the object content. For manifest objects, this value is the MD5 checksum of the concatenated string of ETag values for each of the segments in the manifest. You are strongly recommended to compute the MD5 checksum value and include it in the request. This enables the Object Storage API to check the integrity of the upload. The value is not quoted. |
| Content-Disposition (Optional) | header | string | If set, specifies the override behavior for the browser. For example, this header might specify that the browser use a download program to save this file rather than show the file, which is the default. |
| Content-Encoding (Optional) | header | string | If set, the value of the Content-Encoding metadata. |
| X-Delete-At (Optional) | header | integer | The date and time in UNIX Epoch time stamp format when the system removes the object. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. The value should be a positive integer corresponding to a time in the future. If both X-Delete-After and X-Delete-At are set then X-Delete-After takes precedence. |
| X-Delete-After (Optional) | header | integer | The number of seconds after which the system removes the object. The value should be a positive integer. Internally, the Object Storage system uses this value to generate an X-Delete-At metadata item. If both X-Delete-After and X-Delete-At are set then X-Delete-After takes precedence. |
| X-Object-Meta-name (Optional) | header | string | The object metadata, where name is the name of the metadata item. You must specify an X-Object-Meta-name header for each metadata name item that you want to add or update. |
| If-None-Match (Optional) | header | string | In combination with Expect: 100-Continue, specify an "If-None-Match: *" header to query whether the server already has a copy of the object before any data is sent. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
| X-Symlink-Target (Optional) | header | string | Set to specify that this is a symlink object. The value is the relative path of the target object in the format <container>/<object>. The target object does not need to exist at the time of symlink creation. You must UTF-8-encode and then URL-encode the names of the container and object before you include them in this header. |
| X-Symlink-Target-Account (Optional) | header | string | Set to specify that this is a cross-account symlink to an object in the account specified in the value. The X-Symlink-Target must also be set for this to be effective. You must UTF-8-encode and then URL-encode the account name before you include it in this header. |
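To illustrate the If-None-Match row above, the following sketch uploads the object only if it does not already exist; if a copy is present, the server would be expected to reject the request with 412 Precondition Failed:

```
curl -i $publicURL/janeausten/helloworld.txt -X PUT -d "Hello" -H "X-Auth-Token: $token" -H "If-None-Match: *" -H "Expect: 100-continue"
```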
| Name | In | Type | Description |
|:--|:--|:--|:--|
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| ETag | header | string | The MD5 checksum of the uploaded object content. The value is not quoted. If the object is an SLO, this value is the MD5 checksum of the segments' ETag values. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example: Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| Content-Type | header | string | If the operation succeeds, this value is the MIME type of the object. If the operation fails, this value is the MIME type of the error text in the response body. |
| last_modified | body | string | The date and time when the object was last modified. The date and time stamp format is ISO 8601: CCYY-MM-DDThh:mm:ss±hh:mm. For example, 2015-08-27T09:49:58-05:00. The ±hh:mm value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is -05:00. |
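To make the multipart-manifest=put parameter above concrete, a static large object upload might look like the following sketch. The segments container, segment names, etags, and sizes are assumptions; the segment objects must already exist and the etag and size_bytes values must match them:

```
curl -i "$publicURL/janeausten/novel?multipart-manifest=put" -X PUT -H "X-Auth-Token: $token" -d '[
  {"path": "/segments/novel/001", "etag": "f688ae26e9cfa3ba6235477831d5122e", "size_bytes": 1048576},
  {"path": "/segments/novel/002", "etag": "8b1a9953c4611296a827abf8c47804d7", "size_bytes": 524288}
]'
```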
Copy object

Copies an object to another object in the object store.

You can copy an object to a new object with the same name. Copying to the same name is an alternative to using POST to add metadata to an object. With POST, you must specify all the metadata. With COPY, you can add additional metadata to the object.

With COPY, you can set the X-Fresh-Metadata header to true to copy the object without any existing metadata.

Alternatively, you can use PUT with the X-Copy-From request header to accomplish the same operation as the COPY object operation.

The COPY operation always creates an object. If you use this operation on an existing object, you replace the existing object and metadata rather than modifying the object. Consequently, this operation returns the Created (201) response code.

Normally, if you use this operation to copy a manifest object, the new object is a normal object and not a copy of the manifest. Instead it is a concatenation of all the segment objects. This means that you cannot copy objects larger than 5 GB in size.

To copy the manifest object, you include the multipart-manifest=get query string in the COPY request. The new object contains the same manifest as the original. The segment objects are not copied. Instead, both the original and new manifest objects share the same set of segment objects.

To copy a symlink either with a COPY or a PUT with the X-Copy-From request, include the symlink=get query string. The new symlink will have the same target as the original. The target object is not copied. Instead, both the original and new symlinks point to the same target object.

All metadata is preserved during the object copy. If you specify metadata on the request to copy the object, either PUT or COPY, the metadata overwrites any conflicting keys on the target (new) object.

Example requests and responses:

Copy the goodbye object from the marktwain container to the janeausten container:

```
curl -i $publicURL/marktwain/goodbye -X COPY -H "X-Auth-Token: $token" -H "Destination: janeausten/goodbye"
```

```
HTTP/1.1 201 Created
Content-Length: 0
X-Copied-From-Last-Modified: Thu, 16 Jan 2014 21:19:45 GMT
X-Copied-From: marktwain/goodbye
Last-Modified: Fri, 17 Jan 2014 18:22:57 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
Content-Type: text/html; charset=UTF-8
X-Object-Meta-Movie: AmericanPie
X-Trans-Id: txdcb481ad49d24e9a81107-0052d97501
X-Openstack-Request-Id: txdcb481ad49d24e9a81107-0052d97501
Date: Fri, 17 Jan 2014 18:22:57 GMT
```

Alternatively, you can use PUT to copy the goodbye object from the marktwain container to the janeausten container.
This request requires a Content-Length header, even if it is set to zero (0).

```
curl -i $publicURL/janeausten/goodbye -X PUT -H \"X-Auth-Token: $token\" -H \"X-Copy-From: /marktwain/goodbye\" -H \"Content-Length: 0\"
```

```
HTTP/1.1 201 Created
Content-Length: 0
X-Copied-From-Last-Modified: Thu, 16 Jan 2014 21:19:45 GMT
X-Copied-From: marktwain/goodbye
Last-Modified: Fri, 17 Jan 2014 18:22:57 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
Content-Type: text/html; charset=UTF-8
X-Object-Meta-Movie: AmericanPie
X-Trans-Id: txdcb481ad49d24e9a81107-0052d97501
X-Openstack-Request-Id: txdcb481ad49d24e9a81107-0052d97501
Date: Fri, 17 Jan 2014 18:22:57 GMT
```

When several replicas exist, the system copies from the most recent replica. That is, the COPY operation behaves as though the X-Newest header is in the request. Normal response codes: 201

| Name | In | Type | Description |
|:-|:-|:--|:-|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| object (Optional) | path | string | The unique name for the object. |
| multipart-manifest (Optional) | query | string | If you include the multipart-manifest=get query parameter and the object is a large object, the object contents are not copied. Instead, the manifest is copied to the new object. |
| symlink (Optional) | query | string | If you include the symlink=get query parameter and the object is a symlink, the target object contents are not copied. Instead, the symlink is copied to create a new symlink to the same target. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| Destination | header | string | The container and object name of the destination object in the form of /container/object. You must UTF-8-encode and then URL-encode the names of the destination container and object before you include them in this header. |
| Destination-Account (Optional) | header | string | Specifies the account name where the object is copied to. If not specified, the object is copied to the account which owns the object (i.e., the account in the path). |
| Content-Type (Optional) | header | string | Sets the MIME type for the object. |
| Content-Encoding (Optional) | header | string | If set, the value of the Content-Encoding metadata. |
| Content-Disposition (Optional) | header | string | If set, specifies the override behavior for the browser. For example, this header might specify that the browser use a download program to save this file rather than show the file, which is the default. |
| X-Object-Meta-name (Optional) | header | string | The object metadata, where name is the name of the metadata item. You must specify an X-Object-Meta-name header for each metadata name item that you want to add or update. |
| X-Fresh-Metadata (Optional) | header | boolean | Enables object creation that omits existing user metadata. If set to true, the COPY request creates an object without existing user metadata. Default value is false. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
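As a hedged sketch of the multipart-manifest=get behavior described above (the manifest name big-manifest is hypothetical), copying a static large object's manifest rather than a flattened concatenation of its segments looks like:

```
curl -i \"$publicURL/marktwain/big-manifest?multipart-manifest=get\" -X COPY \
     -H \"X-Auth-Token: $token\" \
     -H \"Destination: janeausten/big-manifest\"
```

Both manifests then reference the same segment objects, so the copy is cheap regardless of the object's logical size.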
| Name | In | Type | Description |
|:-|:-|:--|:-|
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| X-Copied-From-Last-Modified (Optional) | header | integer | For a copied object, the date and time in UNIX Epoch time stamp format when the container and object name from which the new object was copied was last modified. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Copied-From (Optional) | header | string | For a copied object, shows the container and object name from which the new object was copied. The value is in the {container}/{object} format. |
| X-Copied-From-Account (Optional) | header | string | For a copied object, shows the account from which the new object was copied. |
| Last-Modified | header | string | The date and time when the object was created or its metadata was changed. The date and time is formatted as shown in this example: Fri, 12 Aug 2016 14:24:16 GMT The time is always in UTC. |
| ETag | header | string | The MD5 checksum of the copied object content. The value is not quoted. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| Content-Type | header | string | If the operation succeeds, this value is the MIME type of the object. If the operation fails, this value is the MIME type of the error text in the response body. |

Delete object

Permanently deletes an object from the object store. Object deletion occurs immediately at request time. Any subsequent GET, HEAD, POST, or DELETE operations will return a 404 Not Found error code. For static large object manifests, you can add the ?multipart-manifest=delete query parameter. This operation deletes the segment objects and, if all deletions succeed, this operation deletes the manifest object. A DELETE request made to a symlink path will delete the symlink rather than the target object. An alternative to using the DELETE operation is to use the POST operation with the bulk-delete query parameter. Example request and response: Delete the helloworld object from the marktwain container:

```
curl -i $publicURL/marktwain/helloworld -X DELETE -H \"X-Auth-Token: $token\"
```

```
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx36c7606fcd1843f59167c-0052d6fdac
X-Openstack-Request-Id: tx36c7606fcd1843f59167c-0052d6fdac
Date: Wed, 15 Jan 2014 21:29:16 GMT
```

Typically, the DELETE operation does not return a response body. However, with the multipart-manifest=delete query parameter, the response body contains a list of manifest and segment objects and the status of their DELETE operations.
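A hedged example of that form of the request (big-manifest is a hypothetical static large object manifest name):

```
curl -i \"$publicURL/marktwain/big-manifest?multipart-manifest=delete\" -X DELETE \
     -H \"X-Auth-Token: $token\"
```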
Normal response codes: 204

| Name | In | Type | Description |
|:|:-|:-|:-|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| object (Optional) | path | string | The unique name for the object. |
| multipart-manifest (Optional) | query | string | If you include the multipart-manifest=delete query parameter and the object is a static large object, the segment objects and manifest object are deleted. If you omit the multipart-manifest=delete query parameter and the object is a static large object, the manifest object is deleted but the segment objects are not deleted. The response body will contain the status of the deletion of every processed segment object. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |

| Name | In | Type | Description |
|:|:-|:--|:|
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
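Where the bulk middleware is enabled, the bulk-delete alternative mentioned above takes a newline-separated list of URL-encoded paths in the request body; a hedged sketch (object names hypothetical):

```
curl -i \"$publicURL?bulk-delete\" -X POST \
     -H \"X-Auth-Token: $token\" \
     -H \"Content-Type: text/plain\" \
     --data-binary $'/marktwain/goodbye\n/marktwain/helloworld\n'
```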
Show object metadata

Shows object metadata. Example requests and responses: Show object metadata:

```
curl $publicURL/marktwain/goodbye --head -H \"X-Auth-Token: $token\"
```

```
HTTP/1.1 200 OK
Content-Length: 14
Accept-Ranges: bytes
Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
X-Timestamp: 1389906751.73463
X-Object-Meta-Book: GoodbyeColumbus
Content-Type: application/octet-stream
X-Trans-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f
X-Openstack-Request-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f
Date: Thu, 16 Jan 2014 21:13:19 GMT
```

Note: The --head option was used in the above example. If we had used -i -X HEAD and the Content-Length response header is non-zero, the cURL command stalls after it prints the response headers because it is waiting for a response body. However, the Object Storage system does not return a response body for the HEAD operation. If the request succeeds, the operation returns the 200 response code. Normal response codes: 200

| Name | In | Type | Description |
|:-|:-|:--|:-|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| object (Optional) | path | string | The unique name for the object. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| temp_url_sig | query | string | Used with temporary URLs to sign the request with an HMAC-SHA1 cryptographic signature that defines the allowed HTTP method, expiration date, full path to the object, and the secret key for the temporary URL. For more information about temporary URLs, see Temporary URL middleware. |
| temp_url_expires | query | integer | The date and time in UNIX Epoch time stamp format or ISO 8601 UTC timestamp when the signature for temporary URLs expires. For example, 1440619048 or 2015-08-26T19:57:28Z is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. For more information about temporary URLs, see Temporary URL middleware. |
| filename (Optional) | query | string | Overrides the default file name. Object Storage generates a default file name for GET temporary URLs that is based on the object name. Object Storage returns this value in the Content-Disposition response header. Browsers can interpret this file name value as a file attachment to save. For more information about temporary URLs, see Temporary URL middleware. |
| multipart-manifest (Optional) | query | string | If you include the multipart-manifest=get query parameter and the object is a large object, the object metadata is not returned. Instead, the response headers will include the manifest metadata and for dynamic large objects the X-Object-Manifest response header. |
| symlink (Optional) | query | string | If you include the symlink=get query parameter and the object is a symlink, then the response will include data and metadata from the symlink itself rather than from the target. |
| X-Newest (Optional) | header | boolean | If set to true, Object Storage queries all replicas to return the most recent one. If you omit this header, Object Storage responds faster after it finds one valid replica. Because setting this header to true is more expensive for the back end, use it only when it is absolutely needed. |
| If-Match (Optional) | header | string | See Request for Comments: 2616. |
| If-None-Match (Optional) | header | string | A client that has one or more entities previously obtained from the resource can verify that none of those entities is current by including a list of their associated entity tags in the If-None-Match header field. See Request for Comments: 2616 for details. |
| If-Modified-Since (Optional) | header | string | See Request for Comments: 2616. |
| If-Unmodified-Since (Optional) | header | string | See Request for Comments: 2616. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information. Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
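A hedged sketch of a request using the temp_url_sig and temp_url_expires parameters above; this assumes the tempurl middleware is enabled and a secret key is set on the account, the signature shown is a placeholder rather than a real HMAC, and no X-Auth-Token is needed:

```
curl --head \"$publicURL/marktwain/goodbye?temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&temp_url_expires=1440619048\"
```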
| Name | In | Type | Description |
|:|:-|:--|:|
| Content-Length | header | string | HEAD operations do not return content. The Content-Length header value is not the size of the response body but is the size of the object, in bytes. |
| X-Object-Meta-name (Optional) | header | string | The object metadata, where name is the name of the metadata item. You must specify an X-Object-Meta-name header for each metadata name item that you want to add or update. |
| Content-Disposition (Optional) | header | string | If present, specifies the override behavior for the browser. For example, this header might specify that the browser use a download program to save this file rather than show the file, which is the default. If not set, this header is not returned by this operation. |
| Content-Encoding (Optional) | header | string | If present, the value of the Content-Encoding metadata. If not set, the operation does not return this header. |
| X-Delete-At (Optional) | header | integer | If present, specifies date and time in UNIX Epoch time stamp format when the system removes the object. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Object-Manifest (Optional) | header | string | If present, this is a dynamic large object manifest object. The value is the container and object name prefix of the segment objects in the form container/prefix. |
| Last-Modified | header | string | The date and time when the object was created or its metadata was changed. The date and time is formatted as shown in this example: Fri, 12 Aug 2016 14:24:16 GMT The time is always in UTC. |
| ETag | header | string | For objects smaller than 5 GB, this value is the MD5 checksum of the object content. The value is not quoted. For manifest objects, this value is the MD5 checksum of the concatenated string of ETag values for each of the segments in the manifest, and not the MD5 checksum of the content that was downloaded. Also the value is enclosed in double-quote characters. You are strongly advised to compute the MD5 checksum of the response body as it is received and compare this value with the one in the ETag header. If they differ, the content was corrupted, so retry the operation. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Static-Large-Object | header | boolean | Set to true if this object is a static large object manifest object. |
| Content-Type | header | string | If the operation succeeds, this value is the MIME type of the object. If the operation fails, this value is the MIME type of the error text in the response body. |
| X-Symlink-Target (Optional) | header | string | If present, this is a symlink object. The value is the relative path of the target object in the format <container>/<object>. |
| X-Symlink-Target-Account (Optional) | header | string | If present, and X-Symlink-Target is present, then this is a cross-account symlink to an object in the account specified in the value. |

See examples above.

Create or update object metadata

Creates or updates object metadata. To create or update custom metadata, use the X-Object-Meta-name header, where name is the name of the metadata item. Note Metadata keys (the name of the metadata) must be treated as case-insensitive at all times. These keys can contain ASCII 7-bit characters that are not control (0-31) characters, DEL, or a separator character, according to HTTP/1.1. The underscore character is silently converted to a hyphen. In addition to the custom metadata, you can update the Content-Type, Content-Encoding, Content-Disposition, and X-Delete-At system metadata items. However, you cannot update other system metadata, such as Content-Length or Last-Modified. You can use COPY as an alternative to the POST operation by copying to the same object. With the POST operation you must specify all metadata items, whereas with the COPY operation, you need to specify only changed or additional items. All metadata is preserved during the object copy. If you specify metadata on the request to copy the object, either PUT or COPY, the metadata overwrites any conflicting keys on the target (new) object. Note While using COPY instead of POST allows sending only a subset of the metadata, it carries the cost of reading and rewriting the entire contents of the object. A POST request deletes any existing custom metadata that you added with a previous PUT or POST request. Consequently, you must specify all custom metadata in the request. However, system metadata is unchanged by the POST request unless you explicitly supply it in a request header. You can also set the X-Delete-At or X-Delete-After header to define when to expire the object. When used as described in this section, the POST operation creates or replaces metadata. This form of the operation has no request body. There are alternate uses of the POST operation as follows: You can also use the form POST feature to upload objects. The POST operation when used with the bulk-delete query parameter can be used to delete multiple objects and containers in a single operation. The POST operation when used with the extract-archive query parameter can be used to upload an archive (tar file). The archive is then extracted to create objects. A POST request must not include an X-Symlink-Target header. If it does then a 400 status code is returned and the object metadata is not modified. When a POST request is sent to a symlink, the metadata will be applied to the symlink, but the request will result in a 307 Temporary Redirect response to the client. The POST is never redirected to the target object, thus a GET/HEAD request to the symlink without symlink=get will not return the metadata that was sent as part of the POST request. Example requests and responses: Create object metadata:

```
curl -i $publicURL/marktwain/goodbye -X POST -H \"X-Auth-Token: $token\" -H \"X-Object-Meta-Book: GoodbyeColumbus\"
```

```
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txb5fb5c91ba1f4f37bb648-0052d84b3f
X-Openstack-Request-Id: txb5fb5c91ba1f4f37bb648-0052d84b3f
Date: Thu, 16 Jan 2014 21:12:31 GMT

<html>
<h1>Accepted
</h1>
<p>The request is accepted for processing.
</p>
</html>
```

Update object metadata:

```
curl -i $publicURL/marktwain/goodbye -X POST -H \"X-Auth-Token: $token\" -H \"X-Object-Meta-Book: GoodbyeOldFriend\"
```

```
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx5ec7ab81cdb34ced887c8-0052d84ca4
X-Openstack-Request-Id: tx5ec7ab81cdb34ced887c8-0052d84ca4
Date: Thu, 16 Jan 2014 21:18:28 GMT

<html>
<h1>Accepted
</h1>
<p>The request is accepted for processing.
</p>
</html>
```

Normal response codes: 202

| Name | In | Type | Description |
|:-|:-|:--|:-|
| account (Optional) | path | string | The unique name for the account. An account is also known as the project or tenant. |
| container (Optional) | path | string | The unique (within an account) name for the container. The container name must be from 1 to 256 characters long and can start with any character and contain any pattern. Character set must be UTF-8. The container name cannot contain a slash (/) character because this character delimits the container and object name. For example, the path /v1/account/www/pages specifies the www container, not the www/pages container. |
| object (Optional) | path | string | The unique name for the object. |
| bulk-delete (Optional) | query | string | When the bulk-delete query parameter is present in the POST request, multiple objects or containers can be deleted with a single request. See Bulk Delete for how this feature is used. |
| extract-archive (Optional) | query | string | When the extract-archive query parameter is present in the POST request, an archive (tar file) is uploaded and extracted to create multiple objects. See Extract Archive for how this feature is used. |
| X-Auth-Token (Optional) | header | string | Authentication token. If you omit this header, your request fails unless the account owner has granted you access through an access control list (ACL). |
| X-Service-Token (Optional) | header | string | A service token. See OpenStack Service Using Composite Tokens for more information. |
| X-Object-Meta-name (Optional) | header | string | The object metadata, where name is the name of the metadata item. You must specify an X-Object-Meta-name header for each metadata name item that you want to add or update. |
| X-Delete-At (Optional) | header | integer | The date and time in UNIX Epoch time stamp format when the system removes the object. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. The value should be a positive integer corresponding to a time in the future. If both X-Delete-After and X-Delete-At are set then X-Delete-After takes precedence. |
| X-Delete-After (Optional) | header | integer | The number of seconds after which the system removes the object. The value should be a positive integer. Internally, the Object Storage system uses this value to generate an X-Delete-At metadata item. If both X-Delete-After and X-Delete-At are set then X-Delete-After takes precedence. |
| Content-Disposition (Optional) | header | string | If set, specifies the override behavior for the browser. For example, this header might specify that the browser use a download program to save this file rather than show the file, which is the default. |
| Content-Encoding (Optional) | header | string | If set, the value of the Content-Encoding metadata. |
| Content-Type (Optional) | header | string | Sets the MIME type for the object. |
| X-Trans-Id-Extra (Optional) | header | string | Extra transaction information.
Use the X-Trans-Id-Extra request header to include extra information to help you debug any errors that might occur with large object upload and other Object Storage transactions. The server appends the first 32 characters of the X-Trans-Id-Extra request header value to the transaction ID value in the generated X-Trans-Id response header. You must UTF-8-encode and then URL-encode the extra transaction information before you include it in the X-Trans-Id-Extra request header. For example, you can include extra transaction information when you upload large objects such as images. When you upload each segment and the manifest, include the same value in the X-Trans-Id-Extra request header. If an error occurs, you can find all requests that are related to the large object upload in the Object Storage logs. You can also use X-Trans-Id-Extra strings to help operators debug requests that fail to receive responses. The operator can search for the extra information in the logs. |
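A hedged sketch of the X-Delete-After parameter above; note that, as described earlier, a POST replaces all custom metadata, so any metadata to keep must be resent (the 86400-second value is illustrative):

```
curl -i $publicURL/marktwain/goodbye -X POST \
     -H \"X-Auth-Token: $token\" \
     -H \"X-Object-Meta-Book: GoodbyeColumbus\" \
     -H \"X-Delete-After: 86400\"
```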
| Name | In | Type | Description |
|:|:-|:--|:|
| Date | header | string | The date and time the system responded to the request, using the preferred format of RFC 7231 as shown in this example Thu, 16 Jun 2016 15:10:38 GMT. The time is always in UTC. |
| X-Timestamp | header | integer | The date and time in UNIX Epoch time stamp format when the account, container, or object was initially created as a current version. For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT. |
| Content-Length | header | string | If the operation succeeds, this value is zero (0) or the length of informational or error text in the response body. |
| Content-Type (Optional) | header | string | If present, this value is the MIME type of the informational or error text in the response body. |
| X-Trans-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. |
| X-Openstack-Request-Id | header | string | A unique transaction ID for this request. Your service provider might need this value if you report a problem. (same as X-Trans-Id) |
If configured, lists endpoints for an account.

List endpoints

Lists endpoints for an object, account, or container. When the cloud provider enables middleware to list the /endpoints/ path, software that needs data location information can use this call to avoid network overhead. The cloud provider can map the /endpoints/ path to another resource, so this exact resource might vary from provider to provider. Because it goes straight to the middleware, the call is not authenticated, so be sure you have tightly secured the environment and network when using this call. Error response codes:201," } ]
{ "category": "Runtime", "file_name": "overview_auth.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift supports a number of auth systems that share the following common characteristics:

- The authentication/authorization part can be an external system or a subsystem run within Swift as WSGI middleware
- The user of Swift passes in an auth token with each request
- Swift validates each token with the external auth system or auth subsystem and caches the result
- The token does not change from request to request, but does expire

The token can be passed into Swift using the X-Auth-Token or the X-Storage-Token header. Both have the same format: just a simple string representing the token. Some auth systems use UUID tokens, some an MD5 hash of something unique, some use something else but the salient point is that the token is a string which can be sent as-is back to the auth system for validation. Swift will make calls to the auth system, giving the auth token to be validated. For a valid token, the auth system responds with an overall expiration time in seconds from now. To avoid the overhead in validating the same token over and over again, Swift will cache the token for a configurable time, but no longer than the expiration time. The Swift project includes two auth systems:

- TempAuth
- Keystone Auth

It is also possible to write your own auth system as described in Extending Auth.

TempAuth is used primarily in Swift's functional test environment and can be used in other test environments (such as SAIO (Swift All In One)). It is not recommended to use TempAuth in a production system. However, TempAuth is fully functional and can be used as a model to develop your own auth system. TempAuth has the concept of admin and non-admin users within an account. Admin users can do anything within the account. Non-admin users can only perform read operations. However, some privileged metadata such as X-Container-Sync-Key is not accessible to non-admin users. Users with the special group .reseller_admin can operate on any account. For an example usage please see swift.common.middleware.tempauth. If a request is coming from a reseller the auth system sets the request environ reseller_request to True. This can be used by other middlewares. Other users may be granted the ability to perform operations on an account or container via ACLs. TempAuth supports two types of ACL:

- Per container ACLs based on the container's X-Container-Read and X-Container-Write metadata. See Container ACLs for more information.
- Per account ACLs based on the account's X-Account-Access-Control metadata. For more information see Account ACLs.

TempAuth allows OPTIONS requests to go through without a token. The TempAuth middleware is responsible for creating its own tokens. A user makes a request containing their username and password and TempAuth responds with a token. This token is then used to perform subsequent requests on the user's account, containers and objects.

Swift is able to authenticate against OpenStack Keystone. In this environment, Keystone is responsible for creating and validating tokens. The KeystoneAuth middleware is responsible for implementing the auth system within Swift as described here. The KeystoneAuth middleware supports per container based ACLs on the container's X-Container-Read and X-Container-Write metadata. For more information see Container ACLs. The account-level ACL is not supported by Keystone auth. In order to use the keystoneauth middleware the auth_token middleware from KeystoneMiddleware will need to be configured.
The authtoken middleware performs the authentication token validation and retrieves actual user authentication information. It can be found in the KeystoneMiddleware distribution. The KeystoneAuth middleware performs authorization and maps the Keystone roles to Swift's ACLs. Configuring Swift to use Keystone is relatively straightforward. The first step is to ensure that you have the auth_token middleware installed. It can either be dropped in your python path or installed via the KeystoneMiddleware package. You first need to make sure you have a service endpoint of type object-store in Keystone pointing to your Swift proxy. For example, having this in your /etc/keystone/default_catalog.templates

```
catalog.RegionOne.object_store.name = Swift Service
catalog.RegionOne.object_store.publicURL = http://swiftproxy:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.adminURL = http://swiftproxy:8080/
catalog.RegionOne.object_store.internalURL = http://swiftproxy:8080/v1/AUTH_$(tenant_id)s
```

On your Swift proxy server you will want to adjust your main pipeline and add auth_token and keystoneauth in your /etc/swift/proxy-server.conf like this

```
[pipeline:main]
pipeline = [....] authtoken keystoneauth proxy-logging proxy-server
```

and add the configuration for the authtoken middleware:

```
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://keystonehost:5000/
auth_url = http://keystonehost:5000/
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = password
cache = swift.cache
include_service_catalog = False
delay_auth_decision = True
```

The actual values for these variables will need to be set depending on your situation, but in short:

- www_authenticate_uri should point to a Keystone service from which users may retrieve tokens. This value is used in the WWW-Authenticate header that auth_token sends with any denial response.
- auth_url points to the Keystone Admin service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens. It is not necessary to append any Keystone API version number to this URI.
- The auth credentials (project_domain_id, user_domain_id, username, project_name, password) will be used to retrieve an admin token. That token will be used to authorize user tokens behind the scenes. These credentials must match the Keystone credentials for the Swift service. The example values shown here assume a user named swift with admin role on a project named service, both being in the Keystone domain with id default. Refer to the KeystoneMiddleware documentation for other examples.
- cache is set to swift.cache. This means that the middleware will get the Swift memcache from the request environment.
- include_service_catalog defaults to True if not set. This means that when validating a token, the service catalog is retrieved and stored in the X-Service-Catalog header. This is required if you use access-rules in Application Credentials. You may also need to increase max_header_size.

Note The authtoken config variable delay_auth_decision must be set to True. The default is False, but that breaks public access, StaticWeb, FormPost, TempURL, and authenticated capabilities requests (using Discoverability). You can finally add the keystoneauth configuration.
Here is a simple configuration: ``` [filter:keystoneauth] use = egg:swift#keystoneauth operator_roles = admin, swiftoperator ``` Use an appropriate list of roles in operator_roles. For example, in some systems, the role member or Member is used to indicate that the user is allowed to operate on project resources. Some OpenStack services such as Cinder and Glance may use a service account. In this mode, you configure a separate account where the service stores project data that it manages. This account is not used directly by the end-user. Instead, all access is done through the service. To access the service account, the service must present two tokens: one from the end-user and another from its own service user. Only when both tokens are present can the account be accessed. This section describes how to set the configuration options to correctly control access to both the normal and service accounts. In this example, end users use the AUTH_ prefix in account names, whereas services use the SERVICE_ prefix: ``` [filter:keystoneauth] use = egg:swift#keystoneauth reseller_prefix = AUTH, SERVICE operator_roles = admin, swiftoperator SERVICE_service_roles = service ``` The actual values for these variables will need to be set depending on your situation as follows:

- The first item in the reseller_prefix list must match Keystone's endpoint (see /etc/keystone/default_catalog.templates above). Normally this is AUTH.
- The second item in the reseller_prefix list is the prefix used by the OpenStack service(s). You must configure this value (SERVICE in the example) with whatever the other OpenStack service(s) use.
- Set the operator_roles option to contain a role or roles that end-users have on projects they use.
- Set the SERVICE_service_roles value to a role or roles that only the OpenStack service user has. Do not use a role that is assigned to normal end users. In this example, the role service is used. The service user is granted this role to a single project only. You do not need to make the service user a member of every project.

This configuration works as follows: The end-user presents a user token to an OpenStack service. The service then makes a Swift request to the account with the SERVICE_ prefix. The service forwards the original user token with the request. It also adds its own service token. Swift validates both tokens. When validated, the user token gives the admin or swiftoperator role(s). When validated, the service token gives the service role. Swift interprets the above configuration as follows:

- Did the user token provide one of the roles listed in operator_roles?
- Did the service token have the service role as described by the SERVICE_service_roles option?

If both conditions are met, the request is granted. Otherwise, Swift rejects the request. In the above example, all services share the same account. You can separate each service into its own account. For example, the following provides a dedicated account for each of the Glance and Cinder services. In addition, you must assign the glance_service and cinder_service roles to the appropriate service users: ``` [filter:keystoneauth] use = egg:swift#keystoneauth reseller_prefix = AUTH, IMAGE, VOLUME operator_roles = admin, swiftoperator IMAGE_service_roles = glance_service VOLUME_service_roles = cinder_service ``` By default the only users able to perform operations (e.g. create a container) on an account are those having a Keystone role for the corresponding Keystone project that matches one of the roles specified in the operator_roles option. 
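As an aside before continuing with ACLs, here is a sketch of what the two-token request pattern described above looks like on the wire, using the common requests library. The URL, project id, and token values are placeholders:

```
import requests

user_token = '<end-user token>'         # placeholder; carries an operator_roles role
service_token = '<service user token>'  # placeholder; carries the service role

# A service (e.g. Glance) reaches its SERVICE_-prefixed account by sending the
# end-user's token plus its own service token; keystoneauth grants access only
# when both tokens validate with the required roles.
resp = requests.put(
    'http://swiftproxy:8080/v1/SERVICE_<project_id>/container/object',
    headers={'X-Auth-Token': user_token,
             'X-Service-Token': service_token},
    data=b'object body',
)
print(resp.status_code)  # 201 on success; 403 if either role check fails
```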
Users who have one of the operator_roles will be able to set container ACLs to grant other users permission to read and/or write objects in specific containers, using X-Container-Read and X-Container-Write headers respectively. In addition to the ACL formats described here, keystoneauth supports ACLs using the format: ``` other_project_id:other_user_id ``` where other_project_id is the UUID of a Keystone project and other_user_id is the UUID of a Keystone user. This will allow the other user to access a container provided their token is scoped to the other project. Both other_project_id and other_user_id may be replaced with the wildcard character * which will match any project or user respectively. Be sure to use Keystone UUIDs rather than names in container ACLs. Note: For backwards compatibility, keystoneauth will by default grant container ACLs expressed as other_project_name:other_user_name (i.e. using Keystone names rather than UUIDs) in the special case when both the other project and the other user are in Keystone's default domain and the project being accessed is also in the default domain. For further information see KeystoneAuth. Users with the Keystone role defined in reseller_admin_role (ResellerAdmin by default) can operate on any account. The auth system sets the request environ reseller_request to True if a request is coming from a user with this role. This can be used by other middlewares. Some common mistakes can result in API requests failing when first deploying Keystone with Swift:

- Incorrect configuration of the Swift endpoint in the Keystone service. By default, keystoneauth expects the account part of a URL to have the form AUTH_<keystone_project_id>. Sometimes the AUTH_ prefix is missed when configuring Swift endpoints in Keystone, as described in the Install Guide. This is easily diagnosed by inspecting the proxy-server log file for a failed request URL and checking that the URL includes the AUTH_ prefix (or whatever reseller prefix may have been configured for keystoneauth): ``` GOOD: proxy-server: 127.0.0.1 127.0.0.1 07/Sep/2016/16/06/58 HEAD /v1/AUTH_cfb8d9d45212408b90bc0776117aec9e HTTP/1.0 204 ... BAD: proxy-server: 127.0.0.1 127.0.0.1 07/Sep/2016/16/07/35 HEAD /v1/cfb8d9d45212408b90bc0776117aec9e HTTP/1.0 403 ... ```
- Incorrect configuration of the authtoken middleware options in the Swift proxy server. The authtoken middleware communicates with the Keystone service to validate tokens that are presented with client requests. To do this authtoken must authenticate itself with Keystone using the credentials configured in the [filter:authtoken] section of /etc/swift/proxy-server.conf. Errors in these credentials can result in authtoken failing to validate tokens and may be revealed in the proxy server logs by a message such as: ``` proxy-server: Identity server rejected authorization ``` Note: More detailed log messaging may be seen by setting the authtoken option log_level = debug. The authtoken configuration options may be checked by attempting to use them to communicate directly with Keystone using an openstack command line. 
For example, given the authtoken configuration sample shown in Configuring Swift to use Keystone, the following command should return a service catalog: ``` openstack --os-identity-api-version=3 --os-auth-url=http://keystonehost:5000/ \\ --os-username=swift --os-user-domain-id=default \\ --os-project-name=service --os-project-domain-id=default \\ --os-password=password catalog show object-store ``` If this openstack command fails then it is likely that there is a problem with the authtoken configuration. TempAuth is written as WSGI middleware, so implementing your own auth is as easy as writing new WSGI middleware, and plugging it in to the proxy server. See Auth Server and Middleware for detailed information on extending the auth system." } ]
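As a usage sketch of the keystoneauth container ACL format described in the entry above, the following grants one specific user in another project read access to a container via python-swiftclient. The endpoint, credentials, container name, and UUIDs are placeholders; adjust for your deployment:

```
from swiftclient.client import Connection

# Placeholder Keystone v3 credentials for a user holding an operator role.
conn = Connection(
    authurl='http://keystonehost:5000/v3',
    user='demo', key='password',
    os_options={'project_name': 'demo',
                'user_domain_name': 'Default',
                'project_domain_name': 'Default'},
    auth_version='3',
)
# Grant read access to a single user in another project, by Keystone UUIDs.
conn.post_container('mycontainer', headers={
    'X-Container-Read': '<other_project_uuid>:<other_user_uuid>',
})
```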
{ "category": "Runtime", "file_name": "overview_wsgi_management.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift development currently targets Ubuntu Server 22.04, but should work on most Linux platforms. Swift is written in Python and has these dependencies: Python (2.7 or 3.6-3.10) rsync 3.x liberasurecode The Python packages listed in the requirements file Testing additionally requires the test dependencies Testing requires these distribution packages To get started with development with Swift, or to just play around, the following docs will be useful: Swift All in One - Set up a VM with Swift installed Development Guidelines First Contribution to Swift Associated Projects There are many clients in the ecosystem. The official CLI and SDK is python-swiftclient. Source code Python Package Index If you want to set up and configure Swift for a production cluster, the following doc should be useful: Object Storage Install Guide Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "proxy.html#module-swift.proxy.server.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "This page contains project-specific documentation for using OpenStack services and libraries. Refer to the language bindings list for Python client library documentation and the Unified OpenStack command line client. Documentation treated like code, powered by the community - interested? Currently viewing which is the current supported release. The OpenStack project is provided under the Apache 2.0 license. Openstack.org is powered by VEXXHOST ." } ]
{ "category": "Runtime", "file_name": "pagination.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "When you create an object or update its metadata, you can optionally set the Content-Encoding metadata. This metadata enables you to indicate that the object content is compressed without losing the identity of the underlying media type (Content-Type) of the file, such as a video. Example Content-Encoding header request: HTTP This example assigns an attachment type to the Content-Encoding header that indicates how the file is downloaded: ``` PUT /<api version>/<account>/<container>/<object> HTTP/1.1 Host: storage.clouddrive.com X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb Content-Type: video/mp4 Content-Encoding: gzip ``` Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "proxy.html#proxy-controllers.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "This page contains project-specific documentation for using OpenStack services and libraries. Refer to the language bindings list for Python client library documentation and the Unified OpenStack command line client. Documentation treated like code, powered by the community - interested? Currently viewing which is the current supported release. The OpenStack project is provided under the Apache 2.0 license. Openstack.org is powered by VEXXHOST ." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#a-note-on-swift-core-maintainers.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. I went ahead and ran probe/test_object_metadata_replication.py on my machine with both sync_method = rsync and sync_method = ssync - that works for me - but I didn't try it with object_post_as_copy = false Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that it's probably just our own personal preference. Rather than a comment that says You should use a list comprehension here - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: I find this more readable: diff with working tested code If the author (or another reviewer) agrees - it's possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description:

- What is broken - without this change
- What is impossible to do with Swift - without this change
- What is slower/worse/harder - without this change

If you're not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a high consistent quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily it's easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code changes. Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some! 
and offer them to the patch author as a diff indicating to them that something like these tests Im providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the tests stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. Its quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, most of the time remember its harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If youre not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have Docstrings. Tests should obviate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation or is contrary to existing documentation is probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isnt done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs. manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful" }, { "data": "You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When youre done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g. 
When I broke the change like this: diff it blew up like this: unit test failure Its not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but thats not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers its imperative that you try your best to answer your own questions as part of your review. Do not say Does this blow up if it gets called when xyz - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say I thought this would blow up, so I wrote this test, but it seems fine. But if your initial reaction is I dont understand this or How does this even work? you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again diffs work best. Scoring is subjective. Try to realize youre making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that to the best of your abilities you have not been able to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior its almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if its broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects. 
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because theres less concrete objective value. Ive seen lots of these kind of changes get lost to the backlog. Ive also seen some success where multiple authors have collaborated to push-over a change rather than provide a review ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider if it need be fixed before the change is suitable to merge vs. fixing it in a follow up change? Consider if the change makes Swift so undeniably better and it was deployed in production without making any additional changes would it still be correct and complete? Would releasing the change to production without any additional follow up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive reviews scores that look different from your reviews - a +2 instead of a +1. But its exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason its different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you dont know how to proceed respond to the reviewers comments on the change and ask for help. Wed love to try and help. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "ratelimit.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Rate limiting in Swift is implemented as a pluggable middleware. Rate limiting is performed on requests that result in database writes to the account and container sqlite dbs. It uses memcached and is dependent on the proxy servers having highly synchronized time. The rate limits are limited by the accuracy of the proxy server clocks. All configuration is optional. If no account or container limits are provided there will be no rate limiting. Configuration available: | 0 | 1 | 2 | |:|:--|:--| | Option | Default | Description | | clock_accuracy | 1000 | Represents how accurate the proxy servers system clocks are with each other. 1000 means that all the proxies clock are accurate to each other within 1 millisecond. No ratelimit should be higher than the clock accuracy. | | maxsleeptimeseconds | 60 | App will immediately return a 498 response if the necessary sleep time ever exceeds the given maxsleeptimeseconds. | | logsleeptime_seconds | 0 | To allow visibility into rate limiting set this value > 0 and all sleeps greater than the number will be logged. | | ratebufferseconds | 5 | Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. | | accountratelimit | 0 | If set, will limit PUT and DELETE requests to /accountname/container_name. Number is in requests per second. | | containerratelimitsize | | When set with containerratelimitx = r: for containers of size x, limit requests per second to r. Will limit PUT, DELETE, and POST requests to /a/c/o. | | containerlistingratelimitsize | | When set with containerlistingratelimitx = r: for containers of size x, limit listing requests per second to r. Will limit GET requests to /a/c. | Option Default Description clock_accuracy 1000 Represents how accurate the proxy servers system clocks are with each other. 1000 means that all the proxies clock are accurate to each other within 1 millisecond. No ratelimit should be higher than the clock accuracy. maxsleeptime_seconds 60 App will immediately return a 498 response if the necessary sleep time ever exceeds the given maxsleeptime_seconds. logsleeptime_seconds 0 To allow visibility into rate limiting set this value > 0 and all sleeps greater than the number will be" }, { "data": "ratebufferseconds 5 Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy. account_ratelimit 0 If set, will limit PUT and DELETE requests to /accountname/containername. Number is in requests per second. containerratelimitsize When set with containerratelimitx = r: for containers of size x, limit requests per second to r. Will limit PUT, DELETE, and POST requests to /a/c/o. containerlistingratelimit_size When set with containerlistingratelimit_x = r: for containers of size x, limit listing requests per second to r. Will limit GET requests to /a/c. The container rate limits are linearly interpolated from the values given. 
A sample container rate limiting could be: containerratelimit100 = 100 containerratelimit200 = 50 containerratelimit500 = 20 This would result in | 0 | 1 | |:|:| | Container Size | Rate Limit | | 0-99 | No limiting | | 100 | 100 | | 150 | 75 | | 500 | 20 | | 1000 | 20 | Container Size Rate Limit 0-99 No limiting 100 100 150 75 500 20 1000 20 The above ratelimiting is to prevent the many writes to a single container bottleneck from causing a problem. There could also be a problem where a single account is just using too much of the clusters resources. In this case, the container ratelimits may not help because the customer could be doing thousands of reqs/sec to distributed containers each getting a small fraction of the total so those limits would never trigger. If a system administrator notices this, he/she can set the X-Account-Sysmeta-Global-Write-Ratelimit on an account and that will limit the total number of write requests (PUT, POST, DELETE, COPY) that account can do for the whole account. This limit will be in addition to the applicable account/container limits from above. This header will be hidden from the user, because of the gatekeeper middleware, and can only be set using a direct client to the account nodes. It accepts a float value and will only limit requests if the value is > 0. To blacklist or whitelist an account set: X-Account-Sysmeta-Global-Write-Ratelimit: BLACKLIST or X-Account-Sysmeta-Global-Write-Ratelimit: WHITELIST in the account headers. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#checkout-the-change.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. I went ahead and ran probe/testobjectmetadata_replication.py on my machine with both syncmethod = rsync and syncmethod = ssync - that works for me - but I didnt try it with objectpostas_copy = false Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that its probably just our own personal preference. Rather than a comment that says You should use a list comprehension here - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: I find this more readable: diff with working tested code If the author (or another reviewer) agrees - its possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description: What is broken - without this change What is impossible to do with Swift - without this change What is slower/worse/harder - without this change If youre not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a high consistent quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily its easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code" }, { "data": "Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some! 
and offer them to the patch author as a diff indicating to them that something like these tests Im providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the tests stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. Its quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, most of the time remember its harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If youre not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have Docstrings. Tests should obviate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation or is contrary to existing documentation is probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isnt done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs. manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful" }, { "data": "You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When youre done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g. 
When I broke the change like this: diff it blew up like this: unit test failure Its not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but thats not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers its imperative that you try your best to answer your own questions as part of your review. Do not say Does this blow up if it gets called when xyz - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say I thought this would blow up, so I wrote this test, but it seems fine. But if your initial reaction is I dont understand this or How does this even work? you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again diffs work best. Scoring is subjective. Try to realize youre making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that to the best of your abilities you have not been able to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior its almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if its broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects. 
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because theres less concrete objective value. Ive seen lots of these kind of changes get lost to the backlog. Ive also seen some success where multiple authors have collaborated to push-over a change rather than provide a review ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider if it need be fixed before the change is suitable to merge vs. fixing it in a follow up change? Consider if the change makes Swift so undeniably better and it was deployed in production without making any additional changes would it still be correct and complete? Would releasing the change to production without any additional follow up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive reviews scores that look different from your reviews - a +2 instead of a +1. But its exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason its different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you dont know how to proceed respond to the reviewers comments on the change and ask for help. Wed love to try and help. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#commit-messages.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/firstcontributionswift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM(SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.testhealthcheck:TestHealthCheck.testhealthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the in their timezone. 
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spend time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or for more specifically in a Swift context read Review Guidelines You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help getting your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#leave-comments.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments, so that other reviewers might choose to do other testing locally when they have the change checked out. E.g. 'I went ahead and ran probe/test_object_metadata_replication.py on my machine with both sync_method = rsync and sync_method = ssync - that works for me - but I didn't try it with object_post_as_copy = false.'

Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well-written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that it's probably just our own personal preference. Rather than a comment that says 'You should use a list comprehension here', rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: 'I find this more readable: <diff with working, tested code>'. If the author (or another reviewer) agrees - it's possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional.

Read the commit message thoroughly before you begin the review. Commit messages must answer the 'why' and the 'what for' - more so than the 'how' or 'what it does'. Commonly this will take the form of a short description: what is broken without this change; what is impossible to do with Swift without this change; what is slower/worse/harder without this change. If you're not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a consistently high quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily it's easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also, commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: https://wiki.openstack.org/wiki/GitCommitMessages

New tests should be added for all code changes. Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some!
and offer them to the patch author as a diff, indicating to them that something like these tests you're providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the test stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. It's quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, remember that most of the time it's harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If you're not sure whether such a test exists - replace their change with an Exception and run tests until you find one that blows up.

Most changes should include documentation. New functions and code should have docstrings. Tests should make new or changed behaviors obvious, with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation, or is contrary to existing documentation, it probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isn't done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs: manpages, sample configs, docstrings, descriptive prose in the source tree, etc.

Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful errors. You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When you're done with your review, an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. E.g.
'When I broke the change like this: <diff> it blew up like this: <unit test failure>' (a fuller, made-up illustration of this pattern appears at the end of this section). It's not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change, but that's not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications.

Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers it's imperative that you try your best to answer your own questions as part of your review. Do not say 'Does this blow up if it gets called when xyz?' - rather, try to find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say 'I thought this would blow up, so I wrote this test, but it seems fine.' But if your initial reaction is 'I don't understand this' or 'How does this even work?' you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues.

Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score, ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again, diffs work best.

Scoring is subjective. Try to realize you're making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that, to the best of your abilities, you have not been able, to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not-well-understood use case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of regression.

Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior, it's almost without exception that the change must be revised to fix the defect before merging, rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug is adding a new bug - if it's broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects.
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because there's less concrete objective value. I've seen lots of these kinds of changes get lost to the backlog. I've also seen some success where multiple authors have collaborated to push over a change rather than provide a review, ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective.

Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete, correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider whether it needs to be fixed before the change is suitable to merge vs. fixed in a follow-up change. Consider: if the change makes Swift so undeniably better, and it was deployed in production without making any additional changes, would it still be correct and complete? Would releasing the change to production without any additional follow-up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment.

Swift Core maintainers may provide positive review scores that look different from your reviews - a +2 instead of a +1. But it's exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason it's different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you don't know how to proceed, respond to the reviewer's comments on the change and ask for help. We'd love to try and help.
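To make the 'When I broke the change like this...' pattern above concrete, here is a minimal, entirely made-up illustration of the kind of artifact a reviewer might paste into a comment - the file, function, and test names below are placeholders invented for this sketch, not real Swift code: ```
When I broke the change like this:

    --- a/swift/common/example_helper.py
    +++ b/swift/common/example_helper.py
    @@ -42,7 +42,7 @@ def normalize_name(name):
    -        return name.strip().lower()
    +        raise Exception('forced failure for review')

it blew up like this:

    FAIL: test.unit.common.test_example_helper.TestNormalizeName.test_lowercases_names
``` " } ]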
{ "category": "Runtime", "file_name": "pseudo-hierarchical-folders-directories.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "You can set quotas on the size and number of objects stored in a container by setting the following metadata: X-Container-Meta-Quota-Bytes. The size, in bytes, of objects that can be stored in a container. X-Container-Meta-Quota-Count. The number of objects that can be stored in a container. When you exceed a container quota, subsequent requests to create objects fail with a 413 Request Entity Too Large error. The Object Storage system uses an eventual consistency model. When you create a new object, the container size and object count might not be immediately updated. Consequently, you might be allowed to create objects even though you have actually exceeded the quota. At some later time, the system updates the container size and object count to the actual values. At this time, subsequent requests fails. In addition, if you are currently under the X-Container-Meta-Quota-Bytes limit and a request uses chunked transfer encoding, the system cannot know if the request will exceed the quota so the system allows the request. However, once the quota is exceeded, any subsequent uploads that use chunked transfer encoding fail. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#consider-edge-cases-very-seriously.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/firstcontributionswift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM(SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.testhealthcheck:TestHealthCheck.testhealthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the in their timezone. 
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spend time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or for more specifically in a Swift context read Review Guidelines You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help getting your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#maintainable-code-is-obvious.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/firstcontributionswift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM(SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.testhealthcheck:TestHealthCheck.testhealthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the in their timezone. 
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spend time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or for more specifically in a Swift context read Review Guidelines You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help getting your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#new-tests.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. I went ahead and ran probe/testobjectmetadata_replication.py on my machine with both syncmethod = rsync and syncmethod = ssync - that works for me - but I didnt try it with objectpostas_copy = false Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that its probably just our own personal preference. Rather than a comment that says You should use a list comprehension here - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: I find this more readable: diff with working tested code If the author (or another reviewer) agrees - its possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description: What is broken - without this change What is impossible to do with Swift - without this change What is slower/worse/harder - without this change If youre not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a high consistent quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily its easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code" }, { "data": "Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some! 
and offer them to the patch author as a diff indicating to them that something like these tests Im providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the tests stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. Its quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, most of the time remember its harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If youre not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have Docstrings. Tests should obviate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation or is contrary to existing documentation is probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isnt done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs. manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful" }, { "data": "You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When youre done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g. 
When I broke the change like this: diff it blew up like this: unit test failure Its not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but thats not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers its imperative that you try your best to answer your own questions as part of your review. Do not say Does this blow up if it gets called when xyz - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say I thought this would blow up, so I wrote this test, but it seems fine. But if your initial reaction is I dont understand this or How does this even work? you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again diffs work best. Scoring is subjective. Try to realize youre making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that to the best of your abilities you have not been able to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior its almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if its broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects. 
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because theres less concrete objective value. Ive seen lots of these kind of changes get lost to the backlog. Ive also seen some success where multiple authors have collaborated to push-over a change rather than provide a review ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider if it need be fixed before the change is suitable to merge vs. fixing it in a follow up change? Consider if the change makes Swift so undeniably better and it was deployed in production without making any additional changes would it still be correct and complete? Would releasing the change to production without any additional follow up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive reviews scores that look different from your reviews - a +2 instead of a +1. But its exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason its different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you dont know how to proceed respond to the reviewers comments on the change and ask for help. Wed love to try and help. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#run-it.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/first_contribution_swift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM (SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (i.e., your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.test_healthcheck:TestHealthCheck.test_healthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probe tests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the #openstack-swift channel on OFTC IRC during working hours in their timezone.
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spends time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or, for more Swift-specific guidance, read the Review Guidelines. You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help get your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide." } ]
{ "category": "Runtime", "file_name": "ring.html#module-swift.common.ring.builder.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply. This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Swift and other components of OpenStack can be found on the OpenStack wiki and at http://docs.openstack.org. Note If youre looking for associated projects that enhance or use Swift, please see the Associated Projects page. See Complete Reference for the Object Storage REST API The following provides supporting information for the REST API: The OpenStack End User Guide has additional information on using Swift. See the Manage objects and containers section. Index Module Index Search Page Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#reviewers-write-code.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. I went ahead and ran probe/test_object_metadata_replication.py on my machine with both sync_method = rsync and sync_method = ssync - that works for me - but I didnt try it with object_post_as_copy = false. Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that its probably just our own personal preference. Rather than a comment that says You should use a list comprehension here - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: I find this more readable: diff with working tested code If the author (or another reviewer) agrees - its possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description: What is broken - without this change What is impossible to do with Swift - without this change What is slower/worse/harder - without this change If youre not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a consistently high quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily its easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code" }, { "data": "changes. Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some!
and offer them to the patch author as a diff indicating to them that something like these tests Im providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the test stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. Its quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, remember that most of the time its harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If youre not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have Docstrings. Tests should demonstrate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation or is contrary to existing documentation, it probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isnt done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs: manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful errors. You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When youre done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g.
When I broke the change like this: diff it blew up like this: unit test failure Its not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but thats not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers its imperative that you try your best to answer your own questions as part of your review. Do not say Does this blow up if it gets called when xyz - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say I thought this would blow up, so I wrote this test, but it seems fine. But if your initial reaction is I dont understand this or How does this even work? you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score, ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again, diffs work best. Scoring is subjective. Try to realize youre making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that, to the best of your abilities, you have not been able, to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "change. Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior its almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if its broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects.
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because theres less concrete objective value. Ive seen lots of these kinds of changes get lost to the backlog. Ive also seen some success where multiple authors have collaborated to push a change over rather than provide a review, ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete, correct, and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider whether it needs to be fixed before the change is suitable to merge, or whether it can be fixed in a follow-up change. If the change makes Swift so undeniably better and it was deployed to production without any additional changes, would it still be correct and complete? Would releasing the change to production without any additional follow up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive review scores that look different from your reviews - a +2 instead of a +1. But its exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason its different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you dont know how to proceed respond to the reviewers comments on the change and ask for help. Wed love to try and help." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Put simply, if you improve Swift, youre a contributor. The easiest way to improve the project is to tell us where theres a bug. In other words, filing a bug is a valuable and helpful way to contribute to the project. Once a bug has been filed, someone will work on writing a patch to fix the bug. Perhaps youd like to fix a bug. Writing code to fix a bug or add new functionality is tremendously important. Once code has been written, it is submitted upstream for review. All code, even that written by the most senior members of the community, must pass code review and all tests before it can be included in the project. Reviewing proposed patches is a very helpful way to be a contributor. Swift is nothing without the community behind it. Wed love to welcome you to our community. Come find us in #openstack-swift on OFTC IRC or on the OpenStack dev mailing list. For general information on contributing to OpenStack, please check out the contributor guide to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. If you want more Swift related project documentation make sure you checkout the Swift developer (contributor) documentation at https://docs.openstack.org/swift/latest/ Filing a bug is the easiest way to contribute. We use Launchpad as a bug tracker; you can find currently-tracked bugs at https://bugs.launchpad.net/swift. Use the Report a bug link to file a new bug. If you find something in Swift that doesnt match the documentation or doesnt meet your expectations with how it should work, please let us know. Of course, if you ever get an error (like a Traceback message in the logs), we definitely want to know about that. Well do our best to diagnose any problem and patch it as soon as possible. A bug report, at minimum, should describe what you were doing that caused the bug. Swift broke, pls fix is not helpful. Instead, something like When I restarted syslog, Swift started logging traceback messages is very helpful. The goal is that we can reproduce the bug and isolate the issue in order to apply a fix. If you dont have full details, thats ok. Anything you can provide is helpful. You may have noticed that there are many tracked bugs, but not all of them have been confirmed. If you take a look at an old bug report and you can reproduce the issue described, please leave a comment on the bug about that. It lets us all know that the bug is very likely to be valid. All code reviews in OpenStack projects are done on https://review.opendev.org/. Reviewing patches is one of the most effective ways you can contribute to the community. Weve written REVIEW_GUIDELINES.rst (found in this source tree) to help you give good reviews. https://wiki.openstack.org/wiki/Swift/PriorityReviews is a starting point to find what reviews are priority in the" }, { "data": "If youre looking for a way to write and contribute code, but youre not sure what to work on, check out the wishlist bugs in the bug tracker. These are normally smaller items that someone took the time to write down but didnt have time to implement. And please join #openstack-swift on OFTC IRC to tell us what youre working on. 
https://docs.openstack.org/swift/latest/firstcontributionswift.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at http://docs.openstack.org/infra/manual/developers.html#development-workflow. Gerrit is the review system used in the OpenStack projects. Were sorry, but we wont be able to respond to pull requests submitted through GitHub. Bugs should be filed on Launchpad, not in GitHubs issue tracker. The Zen of Python Simple Scales Minimal dependencies Re-use existing tools and libraries when reasonable Leverage the economies of scale Small, loosely coupled RESTful services No single points of failure Start with the use case then design from the cluster operator up If you havent argued about it, you dont have the right answer yet :) If it is your first implementation, you probably arent done yet :) Please dont feel offended by difference of opinion. Be prepared to advocate for your change and iterate on it based on feedback. Reach out to other people working on the project on IRC or the mailing list - we want to help. Set up a Swift All-In-One VM(SAIO). Make your changes. Docs and tests for your patch must land before or with your patch. Run unit tests, functional tests, probe tests ./.unittests ./.functests ./.probetests Run tox (no command-line args needed) git review Running the tests above against Swift in your development environment (ie your SAIO) will catch most issues. Any patch you propose is expected to be both tested and documented and all tests should pass. If you want to run just a subset of the tests while you are developing, you can use pytest: ``` cd test/unit/common/middleware/ && pytest test_healthcheck.py ``` To check which parts of your code are being exercised by a test, you can run tox and then point your browser to swift/cover/index.html: ``` tox -e py27 -- test.unit.common.middleware.testhealthcheck:TestHealthCheck.testhealthcheck ``` Swifts unit tests are designed to test small parts of the code in isolation. The functional tests validate that the entire system is working from an external perspective (they are black-box tests). You can even run functional tests against public Swift endpoints. The probetests are designed to test much of Swifts internal processes. For example, a test may write data, intentionally corrupt it, and then ensure that the correct processes detect and repair it. When your patch is submitted for code review, it will automatically be tested on the OpenStack CI infrastructure. In addition to many of the tests above, it will also be tested by several other OpenStack test jobs. Once your patch has been reviewed and approved by core reviewers and has passed all automated tests, it will be merged into the Swift source tree." }, { "data": "If youre working on something, its a very good idea to write down what youre thinking about. This lets others get up to speed, helps you collaborate, and serves as a great record for future reference. Write down your thoughts somewhere and put a link to it here. It doesnt matter what form your thoughts are in; use whatever is best for you. Your document should include why your idea is needed and your thoughts on particular design choices and tradeoffs. Please include some contact information (ideally, your IRC nick) so that people can collaborate with you. People working on the Swift project may be found in the in their timezone. 
The channel is logged, so if you ask a question when no one is around, you can check the log to see if its been answered: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/ This is a Swift team meeting. The discussion in this meeting is about all things related to the Swift project: time: http://eavesdrop.openstack.org/#SwiftTeamMeeting agenda: https://wiki.openstack.org/wiki/Meetings/Swift We use the openstack-discuss@lists.openstack.org mailing list for asynchronous discussions or to communicate with other OpenStack teams. Use the prefix [swift] in your subject line (its a high-volume list, so most people use email filters). More information about the mailing list, including how to subscribe and read the archives, can be found at: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss The swift-core team is an active group of contributors who are responsible for directing and maintaining the Swift project. As a new contributor, your interaction with this group will be mostly through code reviews, because only members of swift-core can approve a code change to be merged into the code repository. But the swift-core team also spend time on IRC so feel free to drop in to ask questions or just to meet us. Note Although your contribution will require reviews by members of swift-core, these arent the only people whose reviews matter. Anyone with a gerrit account can post reviews, so you can ask other developers you know to review your code and you can review theirs. (A good way to learn your way around the codebase is to review other peoples patches.) If youre thinking, Im new at this, how can I possibly provide a helpful review?, take a look at How to Review Changes the OpenStack Way. Or for more specifically in a Swift context read Review Guidelines You can learn more about the role of core reviewers in the OpenStack governance documentation: https://docs.openstack.org/contributors/common/governance.html#core-reviewer The membership list of swift-core is maintained in gerrit: https://review.opendev.org/#/admin/groups/24,members You can also find the members of the swift-core team at the Swift weekly meetings. Understanding how reviewers review and what they look for will help getting your code merged. See Swift Review Guidelines for how we review code. Keep in mind that reviewers are also human; if something feels stalled, then come and poke us on IRC or add it to our meeting agenda. All common PTL duties are enumerated in the PTL guide. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "review_guidelines.html#scoring.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Effective code review is a skill like any other professional skill you develop with experience. Effective code review requires trust. No one is perfect. Everyone makes mistakes. Trust builds over time. This document will enumerate behaviors commonly observed and associated with competent reviews of changes purposed to the Swift code base. No one is expected to follow these steps. Guidelines are not rules, not all behaviors will be relevant in all situations. Code review is collaboration, not judgement. Alistair Coles You will need to have a copy of the change in an environment where you can freely edit and experiment with the code in order to provide a non-superficial review. Superficial reviews are not terribly helpful. Always try to be helpful. ;) Check out the change so that you may begin. Commonly, git review -d <change-id> Imagine that you submit a patch to Swift, and a reviewer starts to take a look at it. Your commit message on the patch claims that it fixes a bug or adds a feature, but as soon as the reviewer downloads it locally and tries to test it, a severe and obvious error shows up. Something like a syntax error or a missing dependency. Did you even run this? is the review comment all contributors dread. Reviewers in particular need to be fearful merging changes that just dont work - or at least fail in frequently common enough scenarios to be considered horribly broken. A comment in our review that says roughly I ran this on my machine and observed description of behavior change is supposed to achieve is the most powerful defense we have against the terrible scorn from our fellow Swift developers and operators when we accidentally merge bad code. If youre doing a fair amount of reviews - you will participate in merging a change that will break my clusters - its cool - Ill do it to you at some point too (sorry about that). But when either of us go look at the reviews to understand the process gap that allowed this to happen - it better not be just because we were too lazy to check it out and run it before it got merged. Or be warned, you may receive, the dreaded Did you even run this? Im sorry, I know its rough. ;) Saying that should rarely happen is the same as saying that will happen Douglas Crockford Scale is an amazingly abusive partner. If you contribute changes to Swift your code is running - in production - at scale - and your bugs cannot hide. I wish on all of us that our bugs may be exceptionally rare - meaning they only happen in extremely unlikely edge cases. For example, bad things that happen only 1 out of every 10K times an op is performed will be discovered in minutes. Bad things that happen only 1 out of every one billion times something happens will be observed - by multiple deployments - over the course of a release. Bad things that happen 1/100 times some op is performed are considered horribly broken. Tests must exhaustively exercise possible scenarios. Every system call and network connection will raise an error and timeout - where will that Exception be caught? Yes, I know Gerrit does this already. You can do it" }, { "data": "You might not need to re-run all the tests on your machine - it depends on the change. But, if youre not sure which will be most useful - running all of them best - unit - functional - probe. If you cant reliably get all tests passing in your development environment you will not be able to do effective reviews. 
Whatever tests/suites you are able to exercise/validate on your machine against your config you should mention in your review comments so that other reviewers might choose to do other testing locally when they have the change checked out. e.g. I went ahead and ran probe/testobjectmetadata_replication.py on my machine with both syncmethod = rsync and syncmethod = ssync - that works for me - but I didnt try it with objectpostas_copy = false Style is an important component to review. The goal is maintainability. However, keep in mind that generally style, readability and maintainability are orthogonal to the suitability of a change for merge. A critical bug fix may be a well written pythonic masterpiece of style - or it may be a hack-y ugly mess that will absolutely need to be cleaned up at some point - but it absolutely should merge because: CRITICAL. BUG. FIX. You should comment inline to praise code that is obvious. You should comment inline to highlight code that you found to be obfuscated. Unfortunately readability is often subjective. We should remember that its probably just our own personal preference. Rather than a comment that says You should use a list comprehension here - rewrite the code as a list comprehension, run the specific tests that hit the relevant section to validate your code is correct, then leave a comment that says: I find this more readable: diff with working tested code If the author (or another reviewer) agrees - its possible the change will get updated to include that improvement before it is merged; or it may happen in a follow-up change. However, remember that style is non-material - it is useful to provide (via diff) suggestions to improve maintainability as part of your review - but if the suggestion is functionally equivalent - it is by definition optional. Read the commit message thoroughly before you begin the review. Commit messages must answer the why and the what for - more so than the how or what it does. Commonly this will take the form of a short description: What is broken - without this change What is impossible to do with Swift - without this change What is slower/worse/harder - without this change If youre not able to discern why a change is being made or how it would be used - you may have to ask for more details before you can successfully review it. Commit messages need to have a high consistent quality. While many things under source control can be fixed and improved in a follow-up change - commit messages are forever. Luckily its easy to fix minor mistakes using the in-line edit feature in Gerrit! If you can avoid ever having to ask someone to change a commit message you will find yourself an amazingly happier and more productive reviewer. Also commit messages should follow the OpenStack Commit Message guidelines, including references to relevant impact tags or bug numbers. You should hand out links to the OpenStack Commit Message guidelines liberally via comments when fixing commit messages during review. Here you go: GitCommitMessages New tests should be added for all code" }, { "data": "Historically you should expect good changes to have a diff line count ratio of at least 2:1 tests to code. Even if a change has to fix a lot of existing tests, if a change does not include any new tests it probably should not merge. If a change includes a good ratio of test changes and adds new tests - you should say so in your review comments. If it does not - you should write some! 
and offer them to the patch author as a diff indicating to them that something like these tests Im providing as an example will need to be included in this change before it is suitable to merge. Bonus points if you include suggestions for the author as to how they might improve or expand upon the tests stubs you provide. Be very careful about asking an author to add a test for a small change before attempting to do so yourself. Its quite possible there is a lack of existing test infrastructure needed to develop a concise and clear test - the author of a small change may not be the best person to introduce a large amount of new test infrastructure. Also, most of the time remember its harder to write the test than the change - if the author is unable to develop a test for their change on their own you may prevent a useful change from being merged. At a minimum you should suggest a specific unit test that you think they should be able to copy and modify to exercise the behavior in their change. If youre not sure if such a test exists - replace their change with an Exception and run tests until you find one that blows up. Most changes should include documentation. New functions and code should have Docstrings. Tests should obviate new or changed behaviors with descriptive and meaningful phrases. New features should include changes to the documentation tree. New config options should be documented in example configs. The commit message should document the change for the change log. Always point out typos or grammar mistakes when you see them in review, but also consider that if you were able to recognize the intent of the statement - documentation with typos may be easier to iterate and improve on than nothing. If a change does not have adequate documentation it may not be suitable to merge. If a change includes incorrect or misleading documentation or is contrary to existing documentation is probably is not suitable to merge. Every change could have better documentation. Like with tests, a patch isnt done until it has docs. Any patch that adds a new feature, changes behavior, updates configs, or in any other way is different than previous behavior requires docs. manpages, sample configs, docstrings, descriptive prose in the source tree, etc. Reviews have been shown to provide many benefits - one of which is shared ownership. After providing a positive review you should understand how the change works. Doing this will probably require you to play with the change. You might functionally test the change in various scenarios. You may need to write a new unit test to validate the change will degrade gracefully under failure. You might have to write a script to exercise the change under some superficial load. You might have to break the change and validate the new tests fail and provide useful" }, { "data": "You might have to step through some critical section of the code in a debugger to understand when all the possible branches are exercised in tests. When youre done with your review an artifact of your effort will be observable in the piles of code and scripts and diffs you wrote while reviewing. You should make sure to capture those artifacts in a paste or gist and include them in your review comments so that others may reference them. e.g. 
When I broke the change like this: diff it blew up like this: unit test failure Its not uncommon that a review takes more time than writing a change - hopefully the author also spent as much time as you did validating their change but thats not really in your control. When you provide a positive review you should be sure you understand the change - even seemingly trivial changes will take time to consider the ramifications. Leave. Lots. Of. Comments. A popular web comic has stated that WTFs/Minute is the only valid measurement of code quality. If something initially strikes you as questionable - you should jot down a note so you can loop back around to it. However, because of the distributed nature of authors and reviewers its imperative that you try your best to answer your own questions as part of your review. Do not say Does this blow up if it gets called when xyz - rather try and find a test that specifically covers that condition and mention it in the comment so others can find it more quickly. Or if you can find no such test, add one to demonstrate the failure, and include a diff in a comment. Hopefully you can say I thought this would blow up, so I wrote this test, but it seems fine. But if your initial reaction is I dont understand this or How does this even work? you should notate it and explain whatever you were able to figure out in order to help subsequent reviewers more quickly identify and grok the subtle or complex issues. Because you will be leaving lots of comments - many of which are potentially not highlighting anything specific - it is VERY important to leave a good summary. Your summary should include details of how you reviewed the change. You may include what you liked most, or least. If you are leaving a negative score ideally you should provide clear instructions on how the change could be modified such that it would be suitable for merge - again diffs work best. Scoring is subjective. Try to realize youre making a judgment call. A positive score means you believe Swift would be undeniably better off with this code merged than it would be going one more second without this change running in production immediately. It is indeed high praise - you should be sure. A negative score means that to the best of your abilities you have not been able to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks. It is a surprisingly difficult chore to be confident about the value of unproven code or a not well understood use-case in an uncertain world, and unfortunately all too easy with a thorough review to uncover our defects, and be reminded of the risk of" }, { "data": "Reviewers must try very hard first and foremost to keep master stable. If you can demonstrate a change has an incorrect behavior its almost without exception that the change must be revised to fix the defect before merging rather than letting it in and having to also file a bug. Every commit must be deployable to production. Beyond that - almost any change might be merge-able depending on its merits! Here are some tips you might be able to use to find more changes that should merge! Fixing bugs is HUGELY valuable - the only thing which has a higher cost than the value of fixing a bug - is adding a new bug - if its broken and this change makes it fixed (without breaking anything else) you have a winner! Features are INCREDIBLY difficult to justify their value against the cost of increased complexity, lowered maintainability, risk of regression, or new defects. 
Try to focus on what is impossible without the feature - when you make the impossible possible, things are better. Make things better. Purely test/doc changes, complex refactoring, or mechanical cleanups are quite nuanced because theres less concrete objective value. Ive seen lots of these kind of changes get lost to the backlog. Ive also seen some success where multiple authors have collaborated to push-over a change rather than provide a review ultimately resulting in a quorum of three or more authors who all agree there is a lot of value in the change - however subjective. Because the bar is high - most reviews will end with a negative score. However, for non-material grievances (nits) - you should feel confident in a positive review if the change is otherwise complete correct and undeniably makes Swift better (not perfect, better). If you see something worth fixing you should point it out in review comments, but when applying a score consider if it need be fixed before the change is suitable to merge vs. fixing it in a follow up change? Consider if the change makes Swift so undeniably better and it was deployed in production without making any additional changes would it still be correct and complete? Would releasing the change to production without any additional follow up make it more difficult to maintain and continue to improve Swift? Endeavor to leave a positive or negative score on every change you review. Use your best judgment. Swift Core maintainers may provide positive reviews scores that look different from your reviews - a +2 instead of a +1. But its exactly the same as your +1. It means the change has been thoroughly and positively reviewed. The only reason its different is to help identify changes which have received multiple competent and positive reviews. If you consistently provide competent reviews you run a VERY high risk of being approached to have your future positive review scores changed from a +1 to +2 in order to make it easier to identify changes which need to get merged. Ideally a review from a core maintainer should provide a clear path forward for the patch author. If you dont know how to proceed respond to the reviewers comments on the change and ask for help. Wed love to try and help. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "ring.html#ring.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply. This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Swift and other components of OpenStack can be found on the OpenStack wiki and at http://docs.openstack.org. Note If youre looking for associated projects that enhance or use Swift, please see the Associated Projects page. See Complete Reference for the Object Storage REST API The following provides supporting information for the REST API: The OpenStack End User Guide has additional information on using Swift. See the Manage objects and containers section. Index Module Index Search Page Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "ring.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "S3 is a product from Amazon, and as such, it includes features that are outside the scope of Swift itself. For example, Swift doesnt have anything to do with billing, whereas S3 buckets can be tied to Amazons billing system. Similarly, log delivery is a service outside of Swift. Its entirely possible for a Swift deployment to provide that functionality, but it is not part of Swift itself. Likewise, a Swift deployment can provide similar geographic availability as S3, but this is tied to the deployers willingness to build the infrastructure and support systems to do" }, { "data": "| S3 REST API method | Category | Swift S3 API | |:--|:--|:| | GET Object | Core-API | Yes | | HEAD Object | Core-API | Yes | | PUT Object | Core-API | Yes | | PUT Object Copy | Core-API | Yes | | DELETE Object | Core-API | Yes | | Initiate Multipart Upload | Core-API | Yes | | Upload Part | Core-API | Yes | | Upload Part Copy | Core-API | Yes | | Complete Multipart Upload | Core-API | Yes | | Abort Multipart Upload | Core-API | Yes | | List Parts | Core-API | Yes | | GET Object ACL | Core-API | Yes | | PUT Object ACL | Core-API | Yes | | PUT Bucket | Core-API | Yes | | GET Bucket List Objects | Core-API | Yes | | HEAD Bucket | Core-API | Yes | | DELETE Bucket | Core-API | Yes | | List Multipart Uploads | Core-API | Yes | | GET Bucket acl | Core-API | Yes | | PUT Bucket acl | Core-API | Yes | | Versioning | Versioning | Yes | | Bucket notification | Notifications | No | | Bucket Lifecycle [1] [2] [3] [4] [5] [6] | Bucket Lifecycle | No | | Bucket policy | Advanced ACLs | No | | Public website [7] [8] [9] [10] | Public Website | No | | Billing [11] [12] | Billing | No | | GET Bucket location | Advanced Feature | Yes | | Delete Multiple Objects | Advanced Feature | Yes | | Object tagging | Advanced Feature | No | | GET Object torrent | Advanced Feature | No | | Bucket inventory | Advanced Feature | No | | GET Bucket service | Advanced Feature | No | | Bucket accelerate | CDN Integration | No | S3 REST API method Category Swift S3 API GET Object Core-API Yes HEAD Object Core-API Yes PUT Object Core-API Yes PUT Object Copy Core-API Yes DELETE Object Core-API Yes Initiate Multipart Upload Core-API Yes Upload Part Core-API Yes Upload Part Copy Core-API Yes Complete Multipart Upload Core-API Yes Abort Multipart Upload Core-API Yes List Parts Core-API Yes GET Object ACL Core-API Yes PUT Object ACL Core-API Yes PUT Bucket Core-API Yes GET Bucket List Objects Core-API Yes HEAD Bucket Core-API Yes DELETE Bucket Core-API Yes List Multipart Uploads Core-API Yes GET Bucket acl Core-API Yes PUT Bucket acl Core-API Yes Versioning Versioning Yes Bucket notification Notifications No Bucket Lifecycle [1] [2] [3] [4] [5] [6] Bucket Lifecycle No Bucket policy Advanced ACLs No Public website [7] [8] [9] [10] Public Website No Billing [11] [12] Billing No GET Bucket location Advanced Feature Yes Delete Multiple Objects Advanced Feature Yes Object tagging Advanced Feature No GET Object torrent Advanced Feature No Bucket inventory Advanced Feature No GET Bucket service Advanced Feature No Bucket accelerate CDN Integration No POST restore Bucket lifecycle Bucket logging Bucket analytics Bucket metrics Bucket replication OPTIONS object Object POST from HTML form Bucket public website Bucket CORS Request payment Bucket tagging Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. 
Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
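As a quick sketch of how the Core-API operations marked Yes above can be exercised, driving Swift's S3 API with the aws CLI might look like the following. The endpoint URL and bucket name are illustrative, and you must first provision S3-style credentials through your auth system (for example, Keystone's EC2 credential mechanism):
```
$ aws --endpoint-url http://127.0.0.1:8080 s3api create-bucket --bucket testbucket
$ aws --endpoint-url http://127.0.0.1:8080 s3api put-object --bucket testbucket --key hello.txt --body hello.txt
$ aws --endpoint-url http://127.0.0.1:8080 s3api list-objects --bucket testbucket
$ aws --endpoint-url http://127.0.0.1:8080 s3api delete-object --bucket testbucket --key hello.txt
```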
{ "category": "Runtime", "file_name": "ring_background.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "A cross-domain policy file allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API. See https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/xdomain.html for a description of the purpose and structure of the cross-domain policy file. The cross-domain policy file is installed in the root of a web server (i.e., the path is /crossdomain.xml). The crossdomain middleware responds to a path of /crossdomain.xml with an XML document such as: ``` <?xml version=\"1.0\"?> <!DOCTYPE cross-domain-policy SYSTEM \"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd\" > <cross-domain-policy> <allow-access-from domain=\"*\" secure=\"false\" /> </cross-domain-policy> ``` You should use a policy appropriate to your site. The examples and the default policy are provided to indicate how to syntactically construct a cross domain policy file they are not recommendations. To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example ellipsis () indicate other middleware you may have chosen to use: ``` [pipeline:main] pipeline = ... crossdomain ... authtoken ... proxy-server ``` And add a filter section, such as: ``` [filter:crossdomain] use = egg:swift#crossdomain crossdomainpolicy = <allow-access-from domain=\"*.example.com\" /> <allow-access-from domain=\"www.example.com\" secure=\"false\" /> ``` For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the crossdomainpolicy value. The crossdomainpolicy name/value is optional. If omitted, the policy defaults as if you had specified: ``` crossdomainpolicy = <allow-access-from domain=\"*\" secure=\"false\" /> ``` Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942 Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing. ed on http://www.w3.org/TR/cors/#simple-response-header the headers etag, x-timestamp, x-trans-id, x-openstack-request-id all metadata headers (X-Container-Meta-* for containers and X-Object-Meta-* for objects) headers listed in X-Container-Meta-Access-Control-Expose-Headers headers configured using the corsexposeheaders option in proxy-server.conf Note An OPTIONS request to a symlink object will respond with the options for the symlink only, the request will not be redirected to the target" }, { "data": "Therefore, if the symlinks target object is in another container with CORS settings, the response will not reflect the settings. To see some CORS Javascript in action download the test CORS page (source below). Host it on a webserver and take note of the protocol and hostname (origin) youll be using to request the page, e.g. http://localhost. Locate a container youd like to query. Needless to say the Swift cluster hosting this container should have CORS support. 
Append the origin of the test page to the container's X-Container-Meta-Access-Control-Allow-Origin header: ```
curl -X POST -H 'X-Auth-Token: xxx' \\
  -H 'X-Container-Meta-Access-Control-Allow-Origin: http://localhost' \\
  http://192.168.56.3:8080/v1/AUTH_test/cont1
``` At this point the container is now accessible to CORS clients hosted on http://localhost. Open the test CORS page in your browser. Populate the Token field. Populate the URL field with the URL of either a container or object. Select the request method. Hit Submit. Assuming the request succeeds you should see the response headers and body. If something went wrong the response status will be 0. A sample cross-site test page is located in the project source tree doc/source/test-cors.html. ```
<!DOCTYPE html>
<html>
 <head>
  <meta charset=\"utf-8\">
  <title>Test CORS</title>
 </head>
 <body>
  Token<br><input id=\"token\" type=\"text\" size=\"64\"><br><br>
  Method<br>
  <select id=\"method\">
   <option value=\"GET\">GET</option>
   <option value=\"HEAD\">HEAD</option>
   <option value=\"POST\">POST</option>
   <option value=\"DELETE\">DELETE</option>
   <option value=\"PUT\">PUT</option>
  </select><br><br>
  URL (Container or Object)<br><input id=\"url\" size=\"64\" type=\"text\"><br><br>
  <input id=\"submit\" type=\"button\" value=\"Submit\" onclick=\"submit(); return false;\">
  <pre id=\"response_headers\"></pre>
  <p>
  <hr>
  <pre id=\"response_body\"></pre>
  <script type=\"text/javascript\">
   function submit() {
     var token = document.getElementById('token').value;
     var method = document.getElementById('method').value;
     var url = document.getElementById('url').value;

     document.getElementById('response_headers').textContent = null;
     document.getElementById('response_body').textContent = null;

     var request = new XMLHttpRequest();
     request.onreadystatechange = function (oEvent) {
       if (request.readyState == 4) {
         responseHeaders = 'Status: ' + request.status;
         responseHeaders = responseHeaders + '\\nStatus Text: ' + request.statusText;
         responseHeaders = responseHeaders + '\\n\\n' + request.getAllResponseHeaders();
         document.getElementById('response_headers').textContent = responseHeaders;
         document.getElementById('response_body').textContent = request.responseText;
       }
     }

     request.open(method, url);
     if (token != '') {
       // custom headers always trigger a pre-flight request
       request.setRequestHeader('X-Auth-Token', token);
     }
     request.send(null);
   }
  </script>
 </body>
</html>
``` Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
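The browser pre-flight that the test page triggers can also be reproduced from Python, which is handy for checking a container's CORS configuration without a browser. This is a hedged sketch (not from the original document); the proxy URL, container path and origin below are assumptions matching the curl example above.
```
# Hedged sketch: reproducing the CORS pre-flight OPTIONS request from Python.
# URL and origin are assumptions matching the curl example above.
import requests

url = 'http://192.168.56.3:8080/v1/AUTH_test/cont1'
resp = requests.options(url, headers={
    'Origin': 'http://localhost',             # must match the allowed origin
    'Access-Control-Request-Method': 'GET',
})
print(resp.status_code)
for name in ('Access-Control-Allow-Origin',
             'Access-Control-Allow-Methods',
             'Access-Control-Allow-Headers'):
    print(name, '=', resp.headers.get(name))
```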
{ "category": "Runtime", "file_name": "s3_compat.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "To discover whether your Object Storage system supports this feature, see Discoverability. Alternatively, check with your service provider. You can use your Object Storage account to create a static website. This static website is created with Static Web middleware and serves container data with a specified index file, error file resolution, and optional file listings. This mode is normally active only for anonymous requests, which provide no authentication token. To use it with authenticated requests, set the header X-Web-Mode to TRUE on the request. The Static Web filter must be added to the pipeline in your /etc/swift/proxy-server.conf file below any authentication middleware. You must also add a Static Web middleware configuration section. Your publicly readable containers are checked for two headers, X-Container-Meta-Web-Index and X-Container-Meta-Web-Error. The X-Container-Meta-Web-Error header is discussed below, in the section called Set error pages for static website. Use X-Container-Meta-Web-Index to determine the index file (or default page served, such as index.html) for your website. When someone initially enters your site, the index.html file displays automatically. If you create sub-directories for your site by creating pseudo-directories in your container, the index page for each sub-directory is displayed by default. If your pseudo-directory does not have a file with the same name as your index file, visits to the sub-directory return a 404 error. You also have the option of displaying a list of files in your pseudo-directory instead of a web page. To do this, set the X-Container-Meta-Web-Listings header to TRUE. You may add styles to your file listing by setting X-Container-Meta-Web-Listings-CSS to a style sheet (for example, lists.css). The following sections show how to use Static Web middleware through Object Storage. Make the container publicly readable. Once the container is publicly readable, you can access your objects directly, but you must set the index file to browse the main site URL and its sub-directories. ``` $ swift post -r '.r:*,.rlistings' container ``` Set the index file. In this case, index.html is the default file displayed when the site appears. ``` $ swift post -m 'web-index:index.html' container ``` Turn on file listing. If you do not set the index file, the URL displays a list of the objects in the container. Instructions on styling the list with a CSS follow. ``` $ swift post -m 'web-listings: true' container ``` Style the file listing using a CSS. ``` $ swift post -m 'web-listings-css:listings.css' container ``` You can create and set custom error pages for visitors to your website; currently, only 401 (Unauthorized) and 404 (Not Found) errors are supported. To do this, set the metadata header, X-Container-Meta-Web-Error. Error pages are served with the status code pre-pended to the name of the error page you set. For instance, if you set X-Container-Meta-Web-Error to error.html, 401 errors will display the page 401error.html. Similarly, 404 errors will display 404error.html. You must have both of these pages created in your container when you set the X-Container-Meta-Web-Error metadata, or your site will display generic error pages. You only have to set the X-Container-Meta-Web-Error metadata once for your entire static website. ``` $ swift post -m 'web-error:error.html' container ``` Any 2nn response indicates success. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. 
See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
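The swift CLI commands above can also be expressed as raw HTTP, which makes the underlying headers explicit. A hedged sketch follows (not from the original document): the storage URL and token are assumptions, and X-Container-Read is the header that `swift post -r` sets under the hood.
```
# Hedged sketch: configuring a static website container with raw HTTP.
# Storage URL and token are assumed placeholders.
import requests

container_url = 'http://127.0.0.1:8080/v1/AUTH_test/container'  # assumed
headers = {
    'X-Auth-Token': 'AUTH_tk_example',            # assumed token
    'X-Container-Read': '.r:*,.rlistings',        # make publicly readable
    'X-Container-Meta-Web-Index': 'index.html',   # default page
    'X-Container-Meta-Web-Listings': 'true',      # enable file listings
    'X-Container-Meta-Web-Listings-CSS': 'listings.css',
    'X-Container-Meta-Web-Error': 'error.html',   # serves 401error.html / 404error.html
}
resp = requests.post(container_url, headers=headers)
assert resp.status_code // 100 == 2               # any 2xx response indicates success
```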
{ "category": "Runtime", "file_name": "ring.html#module-swift.common.ring.composite_builder.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "The internal REST API used between the proxy server and the account, container and object server is almost identical to public Swift REST API, but with a few internal extensions (for example, update an account with a new container). The pluggable back-end APIs for the three REST API servers (account, container, object) abstracts the needs for servicing the various REST APIs from the details of how data is laid out and stored on-disk. The APIs are documented in the reference implementations for all three servers. For historical reasons, the object server backend reference implementation module is named diskfile, while the account and container server backend reference implementation modules are named appropriately. This API is still under development and not yet finalized. Pluggable Back-end for Account Server Encapsulates working with an account database. Create account_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object put_timestamp put timestamp Create container table which is specific to the account DB. conn DB connection object Create policy_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Check if the account DB is empty. True if the database has no active containers. Get global data for the account. dict with keys: account, createdat, puttimestamp, deletetimestamp, statuschangedat, containercount, objectcount, bytesused, hash, id Get global policy stats for the account. do_migrations boolean, if True the policy stat dicts will always include the container_count key; otherwise it may be omitted on legacy databases until they are migrated. dict of policy stats where the key is the policy index and the value is a dictionary like {object_count: M, bytesused: N, containercount: L} Only returns true if the status field is set to DELETED. Get a list of containers sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query reverse reverse the result order. allow_reserved exclude names with reserved-byte by default list of tuples of (name, objectcount, bytesused, put_timestamp, 0) Turn this db record dict into the format this service uses for pending pickles. Merge items into the container table. itemlist list of dictionaries of {name, puttimestamp, deletetimestamp, objectcount, bytes_used, deleted, storagepolicyindex} source if defined, update incoming_sync with the source Create a container with the given attributes. name name of the container to create (a native string) puttimestamp puttimestamp of the container to create deletetimestamp deletetimestamp of the container to create object_count number of objects in the container bytes_used number of bytes used by the container storagepolicyindex the storage policy for this container Pluggable Back-ends for Container Server Encapsulates working with a container database. Note that this may involve multiple on-disk DB files if the container becomes sharded: dbfile is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded. db_files is a list of existing db files for the broker. 
This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING" }, { "data": "db_file is the path to whichever db is currently authoritative for the container. Depending on the containers state, this may not be the same as the dbfile argument given to init_(), unless forcedbfile is True in which case db_file is always equal to the dbfile argument given to init_(). pendingfile is always equal to db_file extended with .pending, i.e. <hash>.db.pending. Create a ContainerBroker instance. If the db doesnt exist, initialize the db file. device_path device path part partition number account account name string container container name string logger a logger instance epoch a timestamp to include in the db filename put_timestamp initial timestamp if broker needs to be initialized storagepolicyindex the storage policy index a tuple of (broker, initialized) where broker is an instance of swift.container.backend.ContainerBroker and initialized is True if the db file was initialized, False otherwise. Create the container_info table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. Also creates the container_stat view. conn DB connection object put_timestamp put timestamp storagepolicyindex storage policy index Create the object table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. conn DB connection object Create policy_stat table. conn DB connection object storagepolicyindex the policy_index the container is being created with Create the shard_range table which is specific to the container DB. conn DB connection object Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if forcedbfile was True when the broker was constructed, then the primary db file is the file passed to the broker constructor. A path to a db file; the file does not necessarily exist. Gets the cached list of valid db files that exist on disk for this broker. reloaddbfiles(). A list of paths to db files ordered by ascending epoch; the list may be empty. Mark an object deleted. name object name to be deleted timestamp timestamp when the object was marked as deleted storagepolicyindex the storage policy index for the object Check if container DB is empty. This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty. True if the database has no active objects, False otherwise Updates this brokers own shard range with the given epoch, sets its state to SHARDING and persists it in the DB. epoch a Timestamp the brokers updated own shard range. Scans the container db for shard ranges. Scanning will start at the upper bound of the any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace. 
This method does not modify the state of the db; callers are responsible for persisting any shard range data in the" }, { "data": "shard_size the size of each shard range limit the maximum number of shard points to be found; a negative value (default) implies no limit. existing_ranges an optional list of existing ShardRanges; if given, this list should be sorted in order of upper bounds; the scan for new shard ranges will start at the upper bound of the last existing ShardRange. minimumshardsize Minimum size of the final shard range. If this is greater than one then the final shard range may be extended to more than shard_size in order to avoid a further shard range with less minimumshardsize rows. a tuple; the first value in the tuple is a list of dicts each having keys {index, lower, upper, object_count} in order of ascending upper; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise. Returns a list of all shard range data, including own shard range and deleted shard ranges. A list of dict representations of a ShardRange. Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry. a list of ContainerBroker Returns the current state of on disk db files. Get global data for the container. dict with keys: account, container, created_at, puttimestamp, deletetimestamp, status, statuschangedat, objectcount, bytesused, reportedputtimestamp, reporteddeletetimestamp, reportedobjectcount, reportedbytesused, hash, id, xcontainersync_point1, xcontainersyncpoint2, and storagepolicy_index, db_state. Get the is_deleted status and info for the container. a tuple, in the form (info, is_deleted) info is a dict as returned by getinfo and isdeleted is a boolean. Get a list of objects which are in a storage policy different from the containers storage policy. start last reconciler sync point count maximum number of entries to get list of dicts with keys: name, created_at, size, contenttype, etag, storagepolicy_index Returns a list of persisted namespaces per input parameters. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored. reverse reverse the result order. states if specified, restricts the returned list to namespaces that have one of the given states; should be a list of ints. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. a list of Namespace objects. Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {name, createdat, size, contenttype, etag, deleted, storagepolicyindex}. 
limit maximum number of entries to get marker if set, objects with names less than or equal to this value will not be included in the" }, { "data": "end_marker if set, objects with names greater than or equal to this value will not be included in the list. include_deleted if True, include only deleted objects; if False, include only undeleted objects; otherwise (default), include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. a list of dicts, each describing an object. Returns a shard range representing this brokers own shard range. If no such range has been persisted in the brokers shard ranges table then a default shard range representing the entire namespace will be returned. The objectcount and bytesused of the returned shard range are not guaranteed to be up-to-date with the current object stats for this broker. Callers that require up-to-date stats should use the get_info method. no_default if True and the brokers own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned. an instance of ShardRange Get information about the DB required for replication. dict containing keys from getinfo plus maxrow and metadata count and metadata is the raw string. Returns a list of persisted shard ranges. marker restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified. end_marker restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified. includes restricts the returned list to the shard range that includes the given value; if includes is specified then fillgaps, marker and endmarker are ignored, but other constraints are applied (e.g. exclude_others and include_deleted). reverse reverse the result order. include_deleted include items that have the delete marker set. states if specified, restricts the returned list to shard ranges that have one of the given states; should be a list of ints. include_own boolean that governs whether the row whose name matches the brokers path is included in the returned list. If True, that row is included unless it is excluded by other constraints (e.g. marker, end_marker, includes). If False, that row is not included. Default is False. exclude_others boolean that governs whether the rows whose names do not match the brokers path are included in the returned list. If True, those rows are not included, otherwise they are included. Default is False. fill_gaps if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. fill_gaps is ignored if includes is specified. a list of instances of swift.common.utils.ShardRange. Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING. a dict with keys {bytesused, objectcount} Returns sharding specific info from the brokers metadata. key if given the value stored under key in the sharding info will be returned. either a dict of sharding info or the value stored under key in that dict. Returns sharding specific info from the brokers metadata with timestamps. 
key if given the value stored under key in the sharding info will be returned. a dict of sharding info with their" }, { "data": "This function tells if there is any shard range other than the brokers own shard range, that is not marked as deleted. A boolean value as described above. Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age. Returns True if this container is a root container, False otherwise. A root container is a container that is not a shard of another container. Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix. limit maximum number of entries to get marker marker query end_marker end marker query prefix prefix query delimiter delimiter for query path if defined, will set the prefix and delimiter based on the path storagepolicyindex storage policy index for query reverse reverse the result order. include_deleted if True, include only deleted objects; if False (default), include only undeleted objects; otherwise, include both deleted and undeleted objects. since_row include only items whose ROWID is greater than the given row id; by default all rows are included. transform_func an optional function that if given will be called for each object to get a transformed version of the object to include in the listing; should have same signature as transformrecord(); defaults to transformrecord(). all_policies if True, include objects for all storage policies ignoring any value given for storagepolicyindex allow_reserved exclude names with reserved-byte by default list of tuples of (name, createdat, size, contenttype, etag, deleted) Turn this db record dict into the format this service uses for pending pickles. Merge items into the object table. itemlist list of dictionaries of {name, createdat, size, content_type, etag, deleted, storagepolicyindex, ctype_timestamp, meta_timestamp} source if defined, update incoming_sync with the source Merge shard ranges into the shard range table. shard_ranges a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARDRANGEKEYS. Creates an object in the DB with its metadata. name object name to be created timestamp timestamp of when the object was created size object size content_type object content-type etag object etag deleted if True, marks the object as deleted and sets the deleted_at timestamp to timestamp storagepolicyindex the storage policy index for the object ctypetimestamp timestamp of when contenttype was last updated meta_timestamp timestamp of when metadata was last updated Reloads the cached list of valid on disk db files for this broker. Removes object records in the given namespace range from the object table. Note that objects are removed regardless of their storagepolicyindex. lower defines the lower bound of object names that will be removed; names greater than this value will be removed; names less than or equal to this value will not be removed. upper defines the upper bound of object names that will be removed; names less than or equal to this value will be removed; names greater than this value will not be removed. The empty string is interpreted as there being no upper bound. 
maxrow if specified only rows less than or equal to maxrow will be removed Update reported stats, available with containers" }, { "data": "puttimestamp puttimestamp to update deletetimestamp deletetimestamp to update objectcount objectcount to update bytesused bytesused to update Given a list of values each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list. The following alias values are supported: listing maps to all states that are considered valid when listing objects; updating maps to all states that are considered valid for redirecting an object update; auditing maps to all states that are considered valid for a shard container that is updating its own shard range table from a root (this currently maps to all states except FOUND). states a list of values each of which may be the name of a state, the number of a state, or an alias a set of integer state numbers, or None if no states are given ValueError if any value in the given list is neither a valid state nor a valid alias Unlinks the brokers retiring DB file. True if the retiring DB was successfully unlinked, False otherwise. Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The brokers own shard range must have an epoch timestamp for this method to succeed. True if the fresh DB was successfully created, False otherwise. Updates the brokers metadata stored under the given key prefixed with a sharding specific namespace. key metadata key in the sharding metadata namespace. value metadata value Update the containerstat policyindex and statuschangedat. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise. Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise. Compares sharddata with existing and updates sharddata with any items of existing that take precedence over the corresponding item in shard_data. shard_data a dict representation of shard range that may be modified by this method. existing a dict representation of shard range. True if shard data has any item(s) that are considered to take precedence over the corresponding item in existing Compares new and existing shard ranges, updating the new shard ranges with any more recent state from the existing, and returns shard ranges sorted into those that need adding because they contain new or updated state and those that need deleting because their state has been superseded. newshardranges a list of dicts, each of which represents a shard range. existingshardranges a dict mapping shard range names to dicts representing a shard range. a tuple (toadd, todelete); to_add is a list of dicts, each of which represents a shard range that is to be added to the existing shard ranges; to_delete is a set of shard range names that are to be deleted. Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer. The multiple timestamps are encoded into a single string for storing in the created_at column of the objects db table. 
new_item A dict of object update attributes existing A dict of existing object attributes True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False implying that the updated item is equal to the" }, { "data": "Disk File Interface for the Swift Object Server The DiskFile, DiskFileWriter and DiskFileReader classes combined define the on-disk abstraction layer for supporting the object server REST API interfaces (excluding REPLICATE). Other implementations wishing to provide an alternative backend for the object server must implement the three classes. An example alternative implementation can be found in the memserver.py and memdiskfile.py modules along size this one. The DiskFileManager is a reference implemenation specific class and is not part of the backend API. The remaining methods in this module are considered implementation specific and are also not considered part of the backend API. Represents an object location to be audited. Other than being a bucket of data, the only useful thing this does is stringify to a filesystem path so the auditors logs look okay. Manage object files. This specific implementation manages object files on a disk formatted with a POSIX-compliant file system that supports extended attributes as metadata on a file or directory. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. The following path format is used for data file locations: <devicespath/<devicedir>/<datadir>/<partdir>/<suffixdir>/<hashdir>/ <datafile>.<ext> mgr associated DiskFileManager instance device_path path to the target device or drive partition partition on the device in which the object lives account account name for the object container container name for the object obj object name for the object _datadir override the full datadir otherwise constructed here policy the StoragePolicy instance use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations open_expired if True, open() will not raise a DiskFileExpired if object is expired nextpartpower the next partition power to be used Context manager to create a file. We create a temporary file first, and then return a DiskFileWriter object to encapsulate the state. Note An implementation is not required to perform on-disk preallocations even if the parameter is specified. But if it does and it fails, it must raise a DiskFileNoSpace exception. size optional initial size of file to explicitly allocate on disk extension file extension to use for the newly-created file; defaults to .data for the sake of tests DiskFileNoSpace if a size is specified and allocation fails Delete the object. This implementation creates a tombstone file using the given timestamp, and removes any older versions of the object file. Any file that has an older timestamp than timestamp will be deleted. Note An implementation is free to use or ignore the timestamp parameter. timestamp timestamp to compare with each file DiskFileError this implementation will raise the same errors as the create() method. Provides the timestamp of the newest data file found in the object directory. A Timestamp instance, or None if no data file was found. DiskFileNotOpen if the open() method has not been previously called on this instance. Provide the datafile metadata for a previously opened object as a dictionary. 
This is metadata that was included when the object was first PUT, and does not include metadata set by any subsequent POST. objects datafile metadata dictionary DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Provide the metadata for a previously opened object as a dictionary. objects metadata dictionary DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Provide the metafile metadata for a previously opened object as a dictionary. This is metadata that was written by a POST and does not include any persistent metadata that was set by the original PUT. objects" }, { "data": "file metadata dictionary, or None if there is no .meta file DiskFileNotOpen if the swift.obj.diskfile.DiskFile.open() method was not previously invoked Open the object. This implementation opens the data file representing the object, reads the associated metadata in the extended attributes, additionally combining metadata from fast-POST .meta files. modernize if set, update this diskfile to the latest format. Currently, this means adding metadata checksums if none are present. current_time Unix time used in checking expiration. If not present, the current time will be used. Note An implementation is allowed to raise any of the following exceptions, but is only required to raise DiskFileNotExist when the object representation does not exist. DiskFileCollision on name mis-match with metadata DiskFileNotExist if the object does not exist DiskFileDeleted if the object was previously deleted DiskFileQuarantined if while reading metadata of the file some data did pass cross checks itself for use as a context manager Return the metadata for an object without requiring the caller to open the object first. current_time Unix time used in checking expiration. If not present, the current time will be used. metadata dictionary for an object DiskFileError this implementation will raise the same errors as the open() method. Return a swift.common.swob.Response class compatible app_iter object as defined by swift.obj.diskfile.DiskFileReader. For this implementation, the responsibility of closing the open file is passed to the swift.obj.diskfile.DiskFileReader object. keep_cache callers preference for keeping data read in the OS buffer cache cooperative_period the period parameter for cooperative yielding during file read quarantinehook 1-arg callable called when obj quarantined; the arg is the reason for quarantine. Default is to ignore it. Not needed by the REST layer. a swift.obj.diskfile.DiskFileReader object Write a block of metadata to an object without requiring the caller to create the object first. Supports fast-POST behavior semantics. metadata dictionary of metadata to be associated with the object DiskFileError this implementation will raise the same errors as the create() method. Management class for devices, providing common place for shared parameters and methods not provided by the DiskFile class (which primarily services the object server REST API layer). The get_diskfile() method is how this implementation creates a DiskFile object. Note This class is reference implementation specific and not part of the pluggable on-disk backend API. Note TODO(portante): Not sure what the right name to recommend here, as manager seemed generic enough, though suggestions are welcome. conf caller provided configuration object logger caller provided logger Clean up on-disk files that are obsolete and gather the set of valid on-disk files for an object. 
hsh_path object hash path frag_index if set, search for a specific fragment index .data file, otherwise accept the first valid .data file a dict that may contain: valid on disk files keyed by their filename extension; a list of obsolete files stored under the key obsolete; a list of files remaining in the directory, reverse sorted, stored under the key files. Take whats in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Construct the path to a device without checking if it is" }, { "data": "device name of target device full path to the device Return the path to a device, first checking to see if either it is a proper mount point, or at least a directory depending on the mount_check configuration option. device name of target device mount_check whether or not to check mountedness of device. Defaults to bool(self.mount_check). full path to the device, None if the path to the device is not a proper mount point or directory. Returns a BaseDiskFile instance for an object based on the objects partition, path parts and policy. device name of target device partition partition on device in which the object lives account account name for the object container container name for the object obj object name for the object policy the StoragePolicy instance Returns a tuple of (a DiskFile instance for an object at the given object_hash, the basenames of the files in the objects hash dir). Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead. device name of target device partition partition on the device in which the object lives object_hash the hash of an object path policy the StoragePolicy instance DiskFileNotExist if the object does not exist a tuple comprising (an instance of BaseDiskFile, a list of file basenames) Returns a BaseDiskFile instance for an object at the given AuditLocation. audit_location object location to be audited Returns a DiskFile instance for an object at the given object_hash. Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead. device name of target device partition partition on the device in which the object lives object_hash the hash of an object path policy the StoragePolicy instance DiskFileNotExist if the object does not exist an instance of BaseDiskFile device name of target device partition partition name suffixes a list of suffix directories to be recalculated policy the StoragePolicy instance skip_rehash just mark the suffixes dirty; return None a dictionary that maps suffix directories Given a simple list of files names, determine the files that constitute a valid fileset i.e. a set of files that defines the state of an object, and determine the files that are obsolete and could be deleted. Note that some files may fall into neither category. If a file is considered part of a valid fileset then its info dict will be added to the results dict, keyed by <extension>_info. Any files that are no longer required will have their info dicts added to a list stored under the key obsolete. The results dict will always contain entries with keys ts_file, datafile and metafile. 
Their values will be the fully qualified path to a file of the corresponding type if there is such a file in the valid fileset, or None. files a list of file names. datadir directory name files are from; this is used to construct file paths in the results, but the datadir is not modified by this method. verify if True verify that the ondisk file contract has not been violated, otherwise do not verify. policy storage policy used to store the files. Used to validate fragment indexes for EC policies. ts_file -> path to a .ts file or None data_file -> path to a .data file or None meta_file -> path to a .meta file or None ctype_file -> path to a .meta file or None ts_info -> a file info dict for a" }, { "data": "file data_info -> a file info dict for a .data file meta_info -> a file info dict for a .meta file ctype_info -> a file info dict for a .meta file which contains the content-type value unexpected -> a list of file paths for unexpected files possible_reclaim -> a list of file info dicts for possible reclaimable files obsolete -> a list of file info dicts for obsolete files Invalidates the hash for a suffix_dir in the partitions hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Returns filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name ctype_timestamp an optional content-type timestamp, an instance of Timestamp a file name Yield an AuditLocation for all objects stored under device_dirs. policy the StoragePolicy instance device_dirs directory of target device auditor_type either ALL or ZBF Parse an on disk file name. filename the file name including extension policy storage policy used to store the file a dict, with keys for timestamp, ext and ctype_timestamp: timestamp is a Timestamp ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension. Subclasses may override this method to add further keys to the returned dict. DiskFileError if any part of the filename is not able to be validated. A context manager that will lock on the partition given. device device targeted by the lock request policy policy targeted by the lock request partition partition targeted by the lock request PartitionLockTimeout If the lock on the partition cannot be granted within the configured timeout. Write data describing a container update notification to a pickle file in the async_pending directory. device name of target device account account name for the object container container name for the object obj object name for the object data update data to be written to pickle file timestamp a Timestamp policy the StoragePolicy instance In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename A context manager that will lock on the partition and, if configured to do so, on the device given. device name of target device policy policy targeted by the replication request partition partition targeted by the replication request ReplicationLockTimeout If the lock on the device cannot be granted within the configured timeout. 
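As a hedged illustration of the lock context managers just described (a sketch, not from the original reference — the conf values, device name, partition and the replication_lock method name are assumptions about the reference implementation):
```
# Hedged sketch: serializing replication work on one partition using the
# DiskFileManager lock context manager described above. Conf values,
# device and partition are assumptions.
from swift.common.storage_policy import POLICIES
from swift.common.utils import get_logger
from swift.obj.diskfile import DiskFileManager

conf = {'devices': '/srv/node', 'mount_check': 'false'}   # assumed layout
mgr = DiskFileManager(conf, get_logger(conf))

with mgr.replication_lock('sda1', POLICIES[0], '1234'):
    # Only one replication job may hold this partition (and, if configured,
    # this device) at a time; ReplicationLockTimeout is raised otherwise.
    pass  # ... move or reconstruct object files here ...
```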
Yields tuples of (hash_only, timestamps) for object information stored for the given device, partition, and (optionally) suffixes. If suffixes is None, all stored suffixes will be searched for object hashes. Note that if suffixes is not None but empty, such as [], then nothing will be" }, { "data": "timestamps is a dict which may contain items mapping: ts_data -> timestamp of data or tombstone file, ts_meta -> timestamp of meta file, if one exists content-type value, if one exists durable -> True if data file at ts_data is durable, False otherwise where timestamps are instances of Timestamp device name of target device partition partition name policy the StoragePolicy instance suffixes optional list of suffix directories to be searched Yields tuples of (fullpath, suffixonly) for suffixes stored on the given device and partition. device name of target device partition partition name policy the StoragePolicy instance Encapsulation of the WSGI read context for servicing GET REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile classs swift.obj.diskfile.DiskFile.reader() method. Note The quarantining behavior of this method is considered implementation specific, and is not required of the API. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. fp open file object pointer reference data_file on-disk data file name for the object obj_size verified on-disk size of the object etag expected metadata etag value for entire file diskchunksize size of reads from disk in bytes keepcachesize maximum object size that will be kept in cache device_path on-disk device path, used when quarantining an obj logger logger caller wants this object to use quarantine_hook 1-arg callable called w/reason when quarantined use_splice if true, use zero-copy splice() to send data pipe_size size of pipe buffer used in zero-copy operations diskfile the diskfile creating this DiskFileReader instance keep_cache should resulting reads be kept in the buffer cache cooperative_period the period parameter when does cooperative yielding during file read Returns an iterator over the data file for range (start, stop) Returns an iterator over the data file for a set of ranges Close the open file handle if present. For this specific implementation, this method will handle quarantining the file if necessary. Does some magic with splice() and tee() to move stuff from disk to network without ever touching userspace. wsockfd file descriptor (integer) of the socket out which to send data Encapsulation of the write context for servicing PUT REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile classs swift.obj.diskfile.DiskFile.create() method. Note It is the responsibility of the swift.obj.diskfile.DiskFile.create() method context manager to close the open file descriptor. Note The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments. 
name name of object from REST API datadir on-disk directory object will end up in on swift.obj.diskfile.DiskFileWriter.put() fd open file descriptor of temporary file to receive data tmppath full path name of the opened file descriptor bytespersync number bytes written between sync calls diskfile the diskfile creating this DiskFileWriter instance nextpartpower the next partition power to be used extension the file extension to be used; may be used internally to distinguish between PUT/POST/DELETE operations Expose internal stats about written chunks. a tuple, (upload_size, etag) Perform any operations necessary to mark the object as durable. For replication policy type this is a no-op. timestamp object put timestamp, an instance of Timestamp Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Write a chunk of data to disk. All invocations of this method must come before invoking the :func: For this implementation, the data is written into a temporary file. chunk the chunk of data to write as a string object alias of DiskFileReader alias of DiskFileWriter alias of DiskFile Finalize writing the file on disk. metadata dictionary of metadata to be associated with the object Provides the timestamp of the newest durable file found in the object directory. A Timestamp instance, or None if no durable file was" }, { "data": "DiskFileNotOpen if the open() method has not been previously called on this instance. Provides information about all fragments that were found in the object directory, including fragments without a matching durable file, and including any fragment chosen to construct the opened diskfile. A dict mapping <Timestamp instance> -> <list of frag indexes>, or None if the diskfile has not been opened or no fragments were found. Remove a tombstone file matching the specified timestamp or datafile matching the specified timestamp and fragment index from the object directory. This provides the EC reconstructor/ssync process with a way to remove a tombstone or fragment from a handoff node after reverting it to its primary node. The hash will be invalidated, and if empty the hsh_path will be removed immediately. timestamp the object timestamp, an instance of Timestamp frag_index fragment archive index, must be a whole number or None. nondurablepurgedelay only remove a non-durable data file if its been on disk longer than this many seconds. meta_timestamp if not None then remove any meta file with this timestamp alias of ECDiskFileReader alias of ECDiskFileWriter alias of ECDiskFile Returns the EC specific filename for given timestamp. timestamp the object timestamp, an instance of Timestamp ext an optional string representing a file extension to be appended to the returned file name frag_index a fragment archive index, used with .data extension only, must be a whole number. ctype_timestamp an optional content-type timestamp, an instance of Timestamp durable if True then include a durable marker in data filename. a file name DiskFileError if ext==.data and the kwarg frag_index is not a whole number Returns timestamp(s) and other info extracted from a policy specific file name. For EC policy the data file name includes a fragment index and possibly a durable marker, both of which must be stripped off to retrieve the timestamp. 
filename the file name including extension ctype_timestamp: timestamp is a Timestamp frag_index is an int or None ctype_timestamp is a Timestamp or None for .meta files, otherwise None ext is a string, the file extension including the leading dot or the empty string if the filename has no extension durable is a boolean that is True if the filename is a data file that includes a durable marker DiskFileError if any part of the filename is not able to be validated. Return int representation of frag_index, or raise a DiskFileError if frag_index is not a whole number. frag_index a fragment archive index policy storage policy used to validate the index against Finalize put by renaming the object data file to include a durable marker. We do this for EC policy because it requires a 2-phase put commit confirmation. timestamp object put timestamp, an instance of Timestamp DiskFileError if the diskfile frag_index has not been set (either during initialisation or a call to put()) The only difference between this method and the replication policy DiskFileWriter method is adding the frag index to the metadata. metadata dictionary of metadata to be associated with object Take whats in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid. partition_dir absolute path to partition dir containing hashes.pkl and hashes.invalid a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Extracts the policy for an object (based on the name of the objects directory) given the device-relative path to the" }, { "data": "Returns None in the event that the path is malformed in some way. The device-relative path is everything after the mount point; for example: 485dc017205a81df3af616d917c90179/1401811134.873649.data would have device-relative path: objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data obj_path device-relative path of an object, or the full path a BaseStoragePolicy or None Get the async dir for the given policy. policyorindex StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. asyncpending or asyncpending-<N> as appropriate Get the data dir for the given policy. policyorindex StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. objects or objects-<N> as appropriate Given the device path, policy, and partition, returns the full path to the partition Get the temp dir for the given policy. policyorindex StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed. tmp or tmp-<N> as appropriate Invalidates the hash for a suffix_dir in the partitions hashes file. suffix_dir absolute path to suffix dir whose hash needs invalidating Given a devices path (e.g. /srv/node), yield an AuditLocation for all objects stored under that directory for the given datadir (policy), if devicedirs isnt set. If devicedirs is set, only yield AuditLocation for the objects under the entries in device_dirs. The AuditLocation only knows the path to the hash directory, not to the .data file therein (if any). This is to avoid a double listdir(hash_dir); the DiskFile object will always do one, so we dont. 
devices parent directory of the devices to be audited datadir objects directory mount_check flag to check if a mount check should be performed on devices logger a logger object device_dirs a list of directories under devices to traverse auditor_type either ALL or ZBF In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it. The path to the device the corrupted file is on. The path to the file you want quarantined. path (str) of directory the file was moved to OSError re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename Read the existing hashes.pkl a dict, the suffix hashes (if any), the key valid will be False if hashes.pkl is corrupt, cannot be read or does not exist Helper function to read the pickled metadata from an object file. fd file descriptor or filename to load the metadata from addmissingchecksum if set and checksum is missing, add it dictionary of metadata Hard-links a file located in target_path using the second path newtargetpath. Creates intermediate directories if required. target_path current absolute filename newtargetpath new absolute filename for the hardlink ignore_missing if True then no exception is raised if the link could not be made because target_path did not exist, otherwise an OSError will be raised. OSError if the hard link could not be created, unless the intended hard link already exists or the target_path does not exist and must_exist if False. True if the link was created by the call to this method, False otherwise. Write hashes to hashes.pkl The updated key is added to hashes before it is written. Helper function to write pickled metadata for an object file. fd file descriptor or filename to write the metadata metadata metadata to write Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "search.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "account_quotas is a middleware which blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. account_quotas uses the x-account-meta-quota-bytes metadata entry to store the overall account quota. Write requests to this metadata entry are only permitted for resellers. There is no overall account quota limit if x-account-meta-quota-bytes is not set. Additionally, account quotas may be set for each storage policy, using metadata of the form x-account-quota-bytes-policy-<policy name>. Again, only resellers may update these metadata, and there will be no limit for a particular policy if the corresponding metadata is not set. Note Per-policy quotas need not sum to the overall account quota, and the sum of all Container quotas for a given policy need not sum to the accounts policy quota. The account_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. For example: ``` [pipeline:main] pipeline = catcherrors cache tempauth accountquotas proxy-server [filter:account_quotas] use = egg:swift#account_quotas ``` To set the quota on an account: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes:10000 ``` Remove the quota: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret post -m quota-bytes: ``` The same limitations apply for the account quotas as for the container quotas. For example, when uploading an object without a content-length header the proxy server doesnt know the final size of the currently uploaded object and the upload will be allowed if the current account size is within the quota. Due to the eventual consistency further uploads might be possible until the account size has been updated. Bases: object Account quota middleware See above for a full description. Returns a WSGI filter app for use with paste.deploy. The s3api middleware will emulate the S3 REST api on top of swift. To enable this middleware to your configuration, add the s3api middleware in front of the auth middleware. See proxy-server.conf-sample for more detail and configurable options. To set up your client, ensure you are using the tempauth or keystone auth system for swift project. When your swift on a SAIO environment, make sure you have setting the tempauth middleware configuration in proxy-server.conf, and the access key will be the concatenation of the account and user strings that should look like test:tester, and the secret access key is the account password. The host should also point to the swift storage hostname. The tempauth option example: ``` [filter:tempauth] use = egg:swift#tempauth useradminadmin = admin .admin .reseller_admin usertesttester = testing ``` An example client using tempauth with the python boto library is as follows: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='test:tester', awssecretaccess_key='testing', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` And if you using keystone auth, you need the ec2 credentials, which can be downloaded from the API Endpoints tab of the dashboard or by openstack ec2 command. 
Here is showing to create an EC2 credential: ``` +++ | Field | Value | +++ | access | c2e30f2cd5204b69a39b3f1130ca8f61 | | links | {u'self': u'http://controller:5000/v3/......'} | | project_id | 407731a6c2d0425c86d1e7f12a900488 | | secret | baab242d192a4cd6b68696863e07ed59 | | trust_id | None | | user_id | 00f0ee06afe74f81b410f3fe03d34fbc | +++ ``` An example client using keystone auth with the python boto library will be: ``` from boto.s3.connection import S3Connection connection = S3Connection( awsaccesskey_id='c2e30f2cd5204b69a39b3f1130ca8f61', awssecretaccess_key='baab242d192a4cd6b68696863e07ed59', port=8080, host='127.0.0.1', is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat()) ``` Set s3api before your auth in your pipeline in proxy-server.conf file. To enable all compatibility currently supported, you should make sure that bulk, slo, and your auth middleware are also included in your proxy pipeline" }, { "data": "Using tempauth, the minimum example config is: ``` [pipeline:main] pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging proxy-server ``` When using keystone, the config will be: ``` [pipeline:main] pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk slo proxy-logging proxy-server ``` Finally, add the s3api middleware section: ``` [filter:s3api] use = egg:swift#s3api ``` Note keystonemiddleware.authtoken can be located before/after s3api but we recommend to put it before s3api because when authtoken is after s3api, both authtoken and s3token will issue the acceptable token to keystone (i.e. authenticate twice). And in the keystonemiddleware.authtoken middleware , you should set delayauthdecision option to True. Currently, the s3api is being ported from https://github.com/openstack/swift3 so any existing issues in swift3 are still remaining. Please make sure descriptions in the example proxy-server.conf and what happens with the config, before enabling the options. The compatibility will continue to be improved upstream, you can keep and eye on compatibility via a check tool build by SwiftStack. See https://github.com/swiftstack/s3compat in detail. Bases: object S3Api: S3 compatibility middleware Check that required filters are present in order in the pipeline. Check that proxy-server.conf has an appropriate pipeline for s3api. Standard filter factory to use the middleware with paste.deploy s3token middleware is for authentication with s3api + keystone. This middleware: Gets a request from the s3api middleware with an S3 Authorization access key. Validates s3 token with Keystone. Transforms the account name to AUTH%(tenantname). Optionally can retrieve and cache secret from keystone to validate signature locally Note If upgrading from swift3, the auth_version config option has been removed, and the auth_uri option now includes the Keystone API version. If you previously had a configuration like ``` [filter:s3token] use = egg:swift3#s3token auth_uri = https://keystonehost:35357 auth_version = 3 ``` you should now use ``` [filter:s3token] use = egg:swift#s3token auth_uri = https://keystonehost:35357/v3 ``` Bases: object Middleware that handles S3 authentication. Returns a WSGI filter app for use with paste.deploy. Bases: object wsgi.input wrapper to verify the hash of the input as its read. Bases: S3Request S3Acl request object. authenticate method will run pre-authenticate request and retrieve account information. Note that it currently supports only keystone and tempauth. 
(no support for the third party authentication middleware) Wrapper method of getresponse to add s3 acl information from response sysmeta headers. Wrap up get_response call to hook with acl handling method. Create a Swift request based on this requests environment. Bases: BaseException Client provided a X-Amz-Content-SHA256, but it doesnt match the data. Inherit from BaseException (rather than Exception) so it cuts from the proxy-server app (which will presumably be the one reading the input) through all the layers of the pipeline back to us. It should never escape the s3api middleware. Bases: Request S3 request object. swob.Request.body is not secure against malicious input. It consumes too much memory without any check when the request body is excessively large. Use xml() instead. Get and set the container acl property checkcopysource checks the copy source existence and if copying an object to itself, for illegal request parameters the source HEAD response getcontainerinfo will return a result dict of getcontainerinfo from the backend Swift. a dictionary of container info from swift.controllers.base.getcontainerinfo NoSuchBucket when the container doesnt exist InternalError when the request failed without 404 get_response is an entry point to be extended for child classes. If additional tasks needed at that time of getting swift response, we can override this method. swift.common.middleware.s3api.s3request.S3Request need to just call getresponse to get pure swift response. Get and set the object acl property S3Timestamp from Date" }, { "data": "If X-Amz-Date header specified, it will be prior to Date header. :return : S3Timestamp instance Create a Swift request based on this requests environment. Get the partNumber param, if it exists, and check it is valid. To be valid, a partNumber must satisfy two criteria. First, it must be an integer between 1 and the maximum allowed parts, inclusive. The maximum allowed parts is the maximum of the configured maxuploadpartnum and, if given, partscount. Second, the partNumber must be less than or equal to the parts_count, if it is given. parts_count if given, this is the number of parts in an existing object. InvalidPartArgument if the partNumber param is invalid i.e. less than 1 or greater than the maximum allowed parts. InvalidPartNumber if the partNumber param is valid but greater than num_parts. an integer part number if the partNumber param exists, otherwise None. Similar to swob.Request.body, but it checks the content length before creating a body string. Bases: object A request class mixin to provide S3 signature v4 functionality Return timestamp string according to the auth type The difference from v2 is v4 have to see X-Amz-Date even though its query auth type. Bases: SigV4Mixin, S3Request Bases: SigV4Mixin, S3AclRequest Helper function to find a request class to use from Map Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: ErrorResponse Bases: S3ResponseBase, HTTPException S3 error object. Reference information about S3 errors is available at: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html Bases: ErrorResponse Bases: HeaderKeyDict Similar to the Swifts normal HeaderKeyDict class, but its key name is normalized as S3 clients expect. 
Bases: ErrorResponse

The numerous remaining S3 error classes in this module (e.g. AccessDenied, NoSuchBucket, NoSuchKey and the rest of the S3 error catalogue) share this same trivial definition, each subclassing ErrorResponse (or, in one case, InvalidArgument).

Bases: S3ResponseBase, Response

Similar to the Response class in Swift, but uses our HeaderKeyDict for headers instead of Swift's HeaderKeyDict. This also translates Swift-specific headers to S3 headers.

Create a new S3 response object based on the given Swift response.

Bases: object Base class for swift3 responses.

A further group of exception classes subclasses ErrorResponse, BucketNotEmpty, S3Exception or plain Exception in the same trivial way.

Bases: ElementBase

Wrapper Element class of lxml.etree.Element to support a utf-8 encoded non-ascii string as a text.

Why do we need this? The original lxml.etree.Element supports only unicode for the text. That hurts maintainability because we would have to call a lot of encode/decode methods to apply the account/container/object name (i.e. PATH_INFO) to each Element instance. When using this class, we can remove such redundant code from the swift.common.middleware.s3api middleware.

utf-8 wrapper property of lxml.etree.Element.text

Bases: dict

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k]

Bases: Timestamp

this format should be like YYYYMMDDThhmmssZ

mktime creates a float instance in epoch time, much like time.mktime; the difference from time.mktime is that it allows two string formats of the argument, for S3 testing usage. TODO: support

timestamp_str a string of timestamp formatted as (a) RFC 2822 (e.g. date header) (b) %Y-%m-%dT%H:%M:%S (e.g. copy result)

time_format a string of format to parse in (b) process

a float instance in epoch time

Returns the system metadata header for given resource type and name.

Returns the system metadata prefix for given resource type.

Validates the name of the bucket against S3 criteria, http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html True is valid, False is invalid.

s3api uses a different implementation approach to achieve S3 ACLs. First, we should understand what we have to design to achieve real S3 ACLs.
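As a concrete reference point for the model that follows, a default S3 AccessControlPolicy document looks roughly like this (an illustrative sketch based on the public AWS documentation; the IDs and names are placeholders):

```
<AccessControlPolicy>
  <Owner>
    <ID>canonical-user-id</ID>
    <DisplayName>owner-display-name</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="CanonicalUser">
        <ID>canonical-user-id</ID>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
```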
Current s3api(real S3)s ACLs Model is as follows: ``` AccessControlPolicy: Owner: AccessControlList: Grant[n]: (Grantee, Permission) ``` Each bucket or object has its own acl consisting of Owner and AcessControlList. AccessControlList can contain some Grants. By default, AccessControlList has only one Grant to allow FULL CONTROLL to owner. Each Grant includes single pair with Grantee, Permission. Grantee is the user (or user group) allowed the given permission. This module defines the groups and the relation tree. If you wanna get more information about S3s ACLs model in detail, please see official documentation here, http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html Bases: object S3 ACL class. http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html): The sample ACL includes an Owner element identifying the owner via the AWS accounts canonical user ID. The Grant element identifies the grantee (either an AWS account or a predefined group), and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission. Check that the user is an owner. Check that the user has a permission. Decode the value to an ACL instance. Convert an ElementTree to an ACL instance Convert HTTP headers to an ACL instance. Bases: Group Access permission to this group allows anyone to access the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request. Note: s3api regards unsigned requests as Swift API accesses, and bypasses them to Swift. As a result, AllUsers behaves completely same as AuthenticatedUsers. Bases: Group This group represents all AWS accounts. Access permission to this group allows any AWS account to access the resource. However, all requests must be signed (authenticated). Bases: object A dict-like object that returns canned ACL. Bases: object Grant Class which includes both Grantee and Permission Create an etree element. Convert an ElementTree to an ACL instance Bases: object Base class for grantee. Methods: init: create a Grantee instance elem: create an ElementTree from itself Static Methods: to an Grantee instance. from_elem: convert a ElementTree to an Grantee instance. Get an etree element of this instance. Convert a grantee string in the HTTP header to an Grantee instance. Bases: Grantee Base class for Amazon S3 Predefined Groups Get an etree element of this instance. Bases: Group WRITE and READ_ACP permissions on a bucket enables this group to write server access logs to the bucket. Bases: object Owner class for S3 accounts Bases: Grantee Canonical user class for S3 accounts. Get an etree element of this instance. A set of predefined grants supported by AWS S3. Decode Swift metadata to an ACL instance. Given a resource type and HTTP headers, this method returns an ACL instance. Encode an ACL instance to Swift" }, { "data": "Given a resource type and an ACL instance, this method returns HTTP headers, which can be used for Swift metadata. Convert a URI to one of the predefined groups. To make controller classes clean, we need these handlers. It is really useful for customizing acl checking algorithms for each controller. BaseAclHandler wraps basic Acl handling. (i.e. it will check acl from ACL_MAP by using HEAD) Make a handler with the name of the controller. (e.g. 
BucketAclHandler is for BucketController) It consists of method(s) for actual S3 method on controllers as follows. Example: ``` class BucketAclHandler(BaseAclHandler): def PUT: << put acl handling algorithms here for PUT bucket >> ``` Note If the method DONT need to recall getresponse in outside of acl checking, the method have to return the response it needs at the end of method. Bases: object BaseAclHandler: Handling ACL for basic requests mapped on ACL_MAP Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body. Bases: BaseAclHandler BucketAclHandler: Handler for BucketController Bases: BaseAclHandler MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController Bases: BaseAclHandler MultiUpload stuff requires acl checking just once for BASE container so that MultiUploadAclHandler extends BaseAclHandler to check acl only when the verb defined. We should define the verb as the first step to request to backend Swift at incoming request. BASE container name is always w/o MULTIUPLOAD_SUFFIX Any check timing is ok but we should check it as soon as possible. | Controller | Verb | CheckResource | Permission | |:-|:-|:-|:-| | Part | PUT | Container | WRITE | | Uploads | GET | Container | READ | | Uploads | POST | Container | WRITE | | Upload | GET | Container | READ | | Upload | DELETE | Container | WRITE | | Upload | POST | Container | WRITE | Controller Verb CheckResource Permission Part PUT Container WRITE Uploads GET Container READ Uploads POST Container WRITE Upload GET Container READ Upload DELETE Container WRITE Upload POST Container WRITE Bases: BaseAclHandler ObjectAclHandler: Handler for ObjectController Bases: MultiUploadAclHandler PartAclHandler: Handler for PartController Bases: BaseAclHandler S3AclHandler: Handler for S3AclController Bases: MultiUploadAclHandler UploadAclHandler: Handler for UploadController Bases: MultiUploadAclHandler UploadsAclHandler: Handler for UploadsController Handle the x-amz-acl header. Note that this header currently used for only normal-acl (not implemented) on s3acl. TODO: add translation to swift acl like as x-container-read to s3acl Takes an S3 style ACL and returns a list of header/value pairs that implement that ACL in Swift, or NotImplemented if there isnt a way to do that yet. Bases: object Base WSGI controller class for the middleware Returns the target resource type of this controller. Bases: Controller Handles unsupported requests. A decorator to ensure that the request is a bucket operation. If the target resource is an object, this decorator updates the request by default so that the controller handles it as a bucket operation. If err_resp is specified, this raises it on error instead. A decorator to ensure the container existence. A decorator to ensure that the request is an object operation. If the target resource is not an object, this raises an error response. Bases: Controller Handles account level requests. Handle GET Service request Bases: Controller Handles bucket" }, { "data": "Handle DELETE Bucket request Handle GET Bucket (List Objects) request Handle HEAD Bucket (Get Metadata) request Handle POST Bucket request Handle PUT Bucket request Bases: Controller Handles requests on objects Handle DELETE Object request Handle GET Object request Handle HEAD Object request Handle PUT Object and PUT Object (Copy) request Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. 
Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Attempts to construct an S3 ACL based on what is found in the swift headers Bases: Controller Handles the following APIs: GET Bucket acl PUT Bucket acl GET Object acl PUT Object acl Those APIs are logged as ACL operations in the S3 server log. Handles GET Bucket acl and GET Object acl. Handles PUT Bucket acl and PUT Object acl. Implementation of S3 Multipart Upload. This module implements S3 Multipart Upload APIs with the Swift SLO feature. The following explains how S3api uses swift container and objects to store S3 upload information: A container to store upload information. [bucket] is the original bucket where multipart upload is initiated. An object of the ongoing upload id. The object is empty and used for checking the target upload status. If the object exists, it means that the upload is initiated but not either completed or aborted. The last suffix is the part number under the upload id. When the client uploads the parts, they will be stored in the namespace with [bucket]+segments/[uploadid]/[partnumber]. Example listing result in the [bucket]+segments container: ``` [bucket]+segments/[uploadid1] # upload id object for uploadid1 [bucket]+segments/[uploadid1]/1 # part object for uploadid1 [bucket]+segments/[uploadid1]/2 # part object for uploadid1 [bucket]+segments/[uploadid1]/3 # part object for uploadid1 [bucket]+segments/[uploadid2] # upload id object for uploadid2 [bucket]+segments/[uploadid2]/1 # part object for uploadid2 [bucket]+segments/[uploadid2]/2 # part object for uploadid2 . . ``` Those part objects are directly used as segments of a Swift Static Large Object when the multipart upload is completed. Bases: Controller Handles the following APIs: Upload Part Upload Part - Copy Those APIs are logged as PART operations in the S3 server log. Handles Upload Part and Upload Part Copy. Bases: Controller Handles the following APIs: List Parts Abort Multipart Upload Complete Multipart Upload Those APIs are logged as UPLOAD operations in the S3 server log. Handles Abort Multipart Upload. Handles List Parts. Handles Complete Multipart Upload. Bases: Controller Handles the following APIs: List Multipart Uploads Initiate Multipart Upload Those APIs are logged as UPLOADS operations in the S3 server log. Handles List Multipart Uploads Handles Initiate Multipart Upload. Bases: Controller Handles Delete Multiple Objects, which is logged as a MULTIOBJECTDELETE operation in the S3 server log. Handles Delete Multiple Objects. Bases: Controller Handles the following APIs: GET Bucket versioning PUT Bucket versioning Those APIs are logged as VERSIONING operations in the S3 server log. Handles GET Bucket versioning. Handles PUT Bucket versioning. Bases: Controller Handles GET Bucket location, which is logged as a LOCATION operation in the S3 server log. Handles GET Bucket location. Bases: Controller Handles the following APIs: GET Bucket logging PUT Bucket logging Those APIs are logged as LOGGING_STATUS operations in the S3 server log. Handles GET Bucket logging. Handles PUT Bucket logging. Bases: object Backend rate-limiting middleware. Rate-limits requests to backend storage node devices. Each (device, request method) combination is independently rate-limited. All requests with a GET, HEAD, PUT, POST, DELETE, UPDATE or REPLICATE method are rate limited on a per-device basis by both a method-specific rate and an overall device rate limit. 
If a request would cause the rate-limit to be exceeded for the method and/or device then a response with a 529 status code is returned. Middleware that will perform many operations on a single request. Expand tar files into a Swift" }, { "data": "Request must be a PUT with the query parameter ?extract-archive=format specifying the format of archive file. Accepted formats are tar, tar.gz, and tar.bz2. For a PUT to the following url: ``` /v1/AUTHAccount/$UPLOADPATH?extract-archive=tar.gz ``` UPLOADPATH is where the files will be expanded to. UPLOADPATH can be a container, a pseudo-directory within a container, or an empty string. The destination of a file in the archive will be built as follows: ``` /v1/AUTHAccount/$UPLOADPATH/$FILE_PATH ``` Where FILE_PATH is the file name from the listing in the tar file. If the UPLOAD_PATH is an empty string, containers will be auto created accordingly and files in the tar that would not map to any container (files in the base directory) will be ignored. Only regular files will be uploaded. Empty directories, symlinks, etc will not be uploaded. If the content-type header is set in the extract-archive call, Swift will assign that content-type to all the underlying files. The bulk middleware will extract the archive file and send the internal files using PUT operations using the same headers from the original request (e.g. auth-tokens, content-Type, etc.). Notice that any middleware call that follows the bulk middleware does not know if this was a bulk request or if these were individual requests sent by the user. In order to make Swift detect the content-type for the files based on the file extension, the content-type in the extract-archive call should not be set. Alternatively, it is possible to explicitly tell Swift to detect the content type using this header: ``` X-Detect-Content-Type: true ``` For example: ``` curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar -T backup.tar -H \"Content-Type: application/x-tar\" -H \"X-Auth-Token: xxx\" -H \"X-Detect-Content-Type: true\" ``` The tar file format (1) allows for UTF-8 key/value pairs to be associated with each file in an archive. If a file has extended attributes, then tar will store those as key/value pairs. The bulk middleware can read those extended attributes and convert them to Swift object metadata. Attributes starting with user.meta are converted to object metadata, and user.mime_type is converted to Content-Type. For example: ``` setfattr -n user.mime_type -v \"application/python-setup\" setup.py setfattr -n user.meta.lunch -v \"burger and fries\" setup.py setfattr -n user.meta.dinner -v \"baked ziti\" setup.py setfattr -n user.stuff -v \"whee\" setup.py ``` Will get translated to headers: ``` Content-Type: application/python-setup X-Object-Meta-Lunch: burger and fries X-Object-Meta-Dinner: baked ziti ``` The bulk middleware will handle xattrs stored by both GNU and BSD tar (2). Only xattrs user.mime_type and user.meta.* are processed. Other attributes are ignored. In addition to the extended attributes, the object metadata and the x-delete-at/x-delete-after headers set in the request are also assigned to the extracted objects. Notes: (1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar 1.27.1 or later. 
(2) Even with pax-format tarballs, different encoders store xattrs slightly differently; for example, GNU tar stores the xattr user.userattribute as pax header SCHILY.xattr.user.userattribute, while BSD tar (which uses libarchive) stores it as LIBARCHIVE.xattr.user.userattribute.

The response from bulk operations functions differently from other Swift responses. This is because a short request body sent from the client could result in many operations on the proxy server and precautions need to be taken to prevent the request from timing out due to lack of activity. To this end, the client will always receive a 200 OK response, regardless of the actual success of the call. The body of the response must be parsed to determine the actual success of the operation. In addition to this the client may receive zero or more whitespace characters prepended to the actual response body while the proxy server is completing the request.

The format of the response body defaults to text/plain but can be either json or xml depending on the Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml. An example body is as follows:

```
{"Response Status": "201 Created", "Response Body": "", "Errors": [], "Number Files Created": 10}
```

If all valid files were uploaded successfully the Response Status will be 201 Created. If any files failed to be created the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In both cases the response body will specify the number of files successfully uploaded and a list of the files that failed.

There are proxy logs created for each file (which becomes a subrequest) in the tar. The subrequest's proxy log will have a swift.source set to EA; the log's content length will reflect the unzipped size of the file. If double proxy-logging is used, the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the unexpanded size of the tar.gz).

Will delete multiple objects or containers from their account with a single request. Responds to POST requests with query parameter ?bulk-delete set. The request url is your storage url. The Content-Type should be set to text/plain. The body of the POST request will be a newline separated list of url encoded objects to delete. You can delete 10,000 (configurable) objects per request. The objects specified in the POST request body must be URL encoded and in the form:

```
/container_name/obj_name
```

or for a container (which must be empty at time of delete):

```
/container_name
```

The response is similar to extract archive, in that every response will be a 200 OK and you must parse the response body for actual results. An example response is:

```
{"Number Not Found": 0, "Response Status": "200 OK", "Response Body": "", "Errors": [], "Number Deleted": 6}
```

If all items were successfully deleted (or did not exist), the Response Status will be 200 OK. If any failed to delete, the response code corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on server errors), etc. In all cases the response body will specify the number of items successfully deleted, not found, and a list of those that failed. The return body will be formatted in the way specified in the request's Accept header. Acceptable formats are text/plain, application/json, application/xml, and text/xml.
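For illustration, a client might drive the bulk-delete endpoint like this (a sketch using the python requests library; the storage URL, token and object names are placeholders):

```
import requests

storage_url = 'http://127.0.0.1:8080/v1/AUTH_test'  # placeholder
token = '<auth token>'                              # placeholder

# Newline-separated list of URL-encoded items to delete.
body = '\n'.join(['/cont/obj1', '/cont/obj2', '/empty-container'])

resp = requests.post(storage_url + '?bulk-delete',
                     headers={'X-Auth-Token': token,
                              'Content-Type': 'text/plain',
                              'Accept': 'application/json'},
                     data=body)

# The HTTP status is always 200 OK; the body carries the real outcome.
print(resp.json())
```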
There are proxy logs created for each object or container (which becomes a subrequest) that is deleted. The subrequests proxy log will have a swift.source set to BD the logs content length of 0. If double proxy-logging is used the leftmost logger will not have a swift.source set and the content length will reflect the size of the payload sent to the proxy (the list of objects/containers to be deleted). Bases: Exception Returns a properly formatted response body according to format. Handles json and xml, otherwise will return text/plain. Note: xml response does not include xml declaration. resulting format generated data about results. list of quoted filenames that failed the tag name to use for root elements when returning XML; e.g. extract or delete Bases: Exception Bases: object Middleware that provides high-level error handling and ensures that a transaction id will be set for every request. Bases: WSGIContext Enforces that inner_iter yields exactly <nbytes> bytes before exhaustion. If inner_iter fails to do so, BadResponseLength is" }, { "data": "inner_iter iterable of bytestrings nbytes number of bytes expected CNAME Lookup Middleware Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domains CNAME record in DNS. This middleware will continue to follow a CNAME chain in DNS until it finds a record ending in the configured storage domain or it reaches the configured maximum lookup depth. If a match is found, the environments Host header is rewritten and the request is passed further down the WSGI chain. Bases: object CNAME Lookup Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. Given a domain, returns its DNS CNAME mapping and DNS ttl. domain domain to query on resolver dns.resolver.Resolver() instance used for executing DNS queries (ttl, result) The container_quotas middleware implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check. Any object PUT operations that exceed these quotas return a 413 response (request entity too large) with a descriptive body. Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second ttl by default), and its unable to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers will be refused). Quotas are set by adding meta values to the container, and are validated when set: | Metadata | Use | |:--|:--| | X-Container-Meta-Quota-Bytes | Maximum size of the container, in bytes. | | X-Container-Meta-Quota-Count | Maximum object count of the container. | Metadata Use X-Container-Meta-Quota-Bytes Maximum size of the container, in bytes. X-Container-Meta-Quota-Count Maximum object count of the container. The container_quotas middleware should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. 
For example:

```
[pipeline:main]
pipeline = catch_errors cache tempauth container_quotas proxy-server

[filter:container_quotas]
use = egg:swift#container_quotas
```

Bases: object WSGI middleware that validates an incoming container sync request using the container-sync-realms.conf style of container sync.

Bases: object Cross domain middleware used to respond to requests for cross domain policy information.

If the path is /crossdomain.xml it will respond with an xml cross domain policy document. This allows web pages hosted elsewhere to use client side technologies such as Flash, Java and Silverlight to interact with the Swift API.

To enable this middleware, add it to the pipeline in your proxy-server.conf file. It should be added before any authentication (e.g., tempauth or keystone) middleware. In this example the ellipses (...) indicate other middleware you may have chosen to use:

```
[pipeline:main]
pipeline = ... crossdomain ... authtoken ... proxy-server
```

And add a filter section, such as:

```
[filter:crossdomain]
use = egg:swift#crossdomain
cross_domain_policy = <allow-access-from domain="*.example.com" />
    <allow-access-from domain="www.example.com" secure="false" />
```

For continuation lines, put some whitespace before the continuation text. Ensure you put a completely blank line to terminate the cross_domain_policy value.

The cross_domain_policy name/value is optional. If omitted, the policy defaults as if you had specified:

```
cross_domain_policy = <allow-access-from domain="*" secure="false" />
```

Note The default policy is very permissive; this is appropriate for most public cloud deployments, but may not be appropriate for all deployments. See also: CWE-942

Returns a 200 response with cross domain policy information

Swift will by default provide clients with an interface providing details about the installation. Unless disabled (i.e. expose_info=false in Proxy Server Configuration), a GET request to /info will return configuration data in JSON format. An example response:

```
{"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}}
```

This would signify to the client that swift version 1.11.0 is running and that staticweb and tempurl are available in this installation.

There may be administrator-only information available via /info. To retrieve it, one must use an HMAC-signed request, similar to TempURL. The signature may be produced like so:

```
swift tempurl GET 3600 /info secret 2>/dev/null | sed s/temp_url/swiftinfo/g
```

Domain Remap Middleware

Middleware that translates container and account parts of a domain to path parameters that the proxy server understands.

Translation is only performed when the request URL's host domain matches one of a list of domains. This list may be configured by the option storage_domain, and defaults to the single domain example.com.

If not already present, a configurable path_root, which defaults to v1, will be added to the start of the translated path.
For example, with the default configuration: ``` container.AUTH-account.example.com/object container.AUTH-account.example.com/v1/object ``` would both be translated to: ``` container.AUTH-account.example.com/v1/AUTH_account/container/object ``` and: ``` AUTH-account.example.com/container/object AUTH-account.example.com/v1/container/object ``` would both be translated to: ``` AUTH-account.example.com/v1/AUTH_account/container/object ``` Additionally, translation is only performed when the account name in the translated path starts with a reseller prefix matching one of a list configured by the option reseller_prefixes, or when no match is found but a defaultresellerprefix has been configured. The reseller_prefixes list defaults to the single prefix AUTH. The defaultresellerprefix is not configured by default. Browsers can convert a host header to lowercase, so the middleware checks that the reseller prefix on the account name is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix. The middleware will also replace any hyphen (-) in the account name with an underscore (_). For example, with the default configuration: ``` auth-account.example.com/container/object AUTH-account.example.com/container/object auth_account.example.com/container/object AUTH_account.example.com/container/object ``` would all be translated to: ``` <unchanged>.example.com/v1/AUTH_account/container/object ``` When no match is found in reseller_prefixes, the defaultresellerprefix config option is used. When no defaultresellerprefix is configured, any request with an account prefix not in the reseller_prefixes list will be ignored by this middleware. For example, with defaultresellerprefix = AUTH: ``` account.example.com/container/object ``` would be translated to: ``` account.example.com/v1/AUTH_account/container/object ``` Note that this middleware requires that container names and account names (except as described above) must be DNS-compatible. This means that the account name created in the system and the containers created by users cannot exceed 63 characters or have UTF-8 characters. These are restrictions over and above what Swift requires and are not explicitly checked. Simply put, this middleware will do a best-effort attempt to derive account and container names from elements in the domain name and put those derived values into the URL path (leaving the Host header unchanged). Also note that using Container to Container Synchronization with remapped domain names is not advised. With Container to Container Synchronization, you should use the true storage end points as sync destinations. Bases: object Domain Remap Middleware See above for a full description. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware. DLO support centers around a user specified filter that matches segments and concatenates them together in object listing order. Please see the DLO docs for Dynamic Large Objects further" }, { "data": "Encryption middleware should be deployed in conjunction with the Keymaster middleware. Implements middleware for object encryption which comprises an instance of a Decrypter combined with an instance of an Encrypter. Provides a factory function for loading encryption middleware. Bases: object File-like object to be swapped in for wsgi.input. 
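Putting the encryption and keymaster pieces together as described above, a deployment might wire them into the proxy pipeline along these lines (a sketch; the secret value is a placeholder and must be generated securely):

```
[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
encryption_root_secret = <base64-encoded value of at least 32 bytes>

[filter:encryption]
use = egg:swift#encryption
```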
Bases: object Middleware for encrypting data and user metadata. By default all PUT or POSTed object data and/or metadata will be encrypted. Encryption of new data and/or metadata may be disabled by setting the disable_encryption option to True. However, this middleware should remain in the pipeline in order for existing encrypted data to be read. Bases: CryptoWSGIContext Encrypt user-metadata header values. Replace each x-object-meta-<key> user metadata header with a corresponding x-object-transient-sysmeta-crypto-meta-<key> header which has the crypto metadata required to decrypt appended to the encrypted value. req a swob Request keys a dict of encryption keys Encrypt the new object headers with a new iv and the current crypto. Note that an object may have encrypted headers while the body may remain unencrypted. Encrypt a header value using the supplied key. crypto a Crypto instance value value to encrypt key crypto key to use a tuple of (encrypted value, cryptometa) where cryptometa is a dict of form returned by getcryptometa() ValueError if value is empty Bases: CryptoWSGIContext Base64-decode and decrypt a value using the crypto_meta provided. value a base64-encoded value to decrypt key crypto key to use crypto_meta a crypto-meta dict of form returned by getcryptometa() decoder function to turn the decrypted bytes into useful data decrypted value Base64-decode and decrypt a value if crypto meta can be extracted from the value itself, otherwise return the value unmodified. A value should either be a string that does not contain the ; character or should be of the form: ``` <base64-encoded ciphertext>;swift_meta=<crypto meta> ``` value value to decrypt key crypto key to use required if True then the value is required to be decrypted and an EncryptionException will be raised if the header cannot be decrypted due to missing crypto meta. decoder function to turn the decrypted bytes into useful data decrypted value if crypto meta is found, otherwise the unmodified value EncryptionException if an error occurs while parsing crypto meta or if the header value was required to be decrypted but crypto meta was not found. Extract a crypto_meta dict from a header. headername name of header that may have cryptometa check if True validate the crypto meta A dict containing crypto_meta items EncryptionException if an error occurs while parsing the crypto meta Determine if a response should be decrypted, and if so then fetch keys. req a Request object crypto_meta a dict of crypto metadata a dict of decryption keys Get a wrapped key from crypto-meta and unwrap it using the provided wrapping key. crypto_meta a dict of crypto-meta wrapping_key key to be used to decrypt the wrapped key an unwrapped key HTTPInternalServerError if the crypto-meta has no wrapped key or the unwrapped key is invalid Bases: object Middleware for decrypting data and user metadata. Bases: BaseDecrypterContext Parses json body listing and decrypt encrypted entries. Updates Content-Length header with new body length and return a body iter. Bases: BaseDecrypterContext Find encrypted headers and replace with the decrypted versions. put_keys a dict of decryption keys used for object PUT. post_keys a dict of decryption keys used for object POST. A list of headers with any encrypted headers replaced by their decrypted values. 
HTTPInternalServerError if any error occurs while decrypting headers

Decrypts a multipart mime doc response body.

resp application response

boundary multipart boundary string

body_key decryption key for the response body

crypto_meta crypto_meta for the response body

generator for decrypted response body

Decrypts a response body.

resp application response

body_key decryption key for the response body

crypto_meta crypto_meta for the response body

offset offset into object content at which response body starts

generator for decrypted response body

This middleware fixes the Etag header of responses so that it is RFC compliant. RFC 7232 specifies that the value of the Etag header must be double quoted.

It must be placed at the beginning of the pipeline, right after cache:

```
[pipeline:main]
pipeline = ... cache etag-quoter ...

[filter:etag-quoter]
use = egg:swift#etag_quoter
```

Set X-Account-Rfc-Compliant-Etags: true at the account level to have any Etags in object responses be double quoted, as in "d41d8cd98f00b204e9800998ecf8427e". Alternatively, you may only fix Etags in a single container by setting X-Container-Rfc-Compliant-Etags: true on the container. This may be necessary for Swift to work properly with some CDNs.

Either option may also be explicitly disabled, so you may enable quoted Etags account-wide as above but turn them off for individual containers with X-Container-Rfc-Compliant-Etags: false. This may be useful if some subset of applications expect Etags to be bare MD5s.

FormPost Middleware

Translates a browser form post into a regular Swift object PUT.

The format of the form is:

```
<form action="<swift-url>" method="POST" enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
```

Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

```
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
```

If you want to specify the content type or content encoding of the files you can set content-encoding or content-type by adding them to the form input:

```
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
```

The above example applies these parameters to all uploaded files. You can also set the content-type and content-encoding on a per-file basis by adding the parameters to each part of the upload.

The <swift-url> is the URL of the Swift destination, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

The name of each file uploaded will be appended to the <swift-url> given. So, you can upload directly to the root of a container with a url like:

```
https://swift-cluster.example.com/v1/AUTH_account/container/
```

Optionally, you can include an object prefix to better separate different users' uploads, such as:

```
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
```

Note the form method must be POST and the enctype must be set as multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. This is an optional parameter. If you are uploading the form via an XMLHttpRequest the redirect should not be included. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload (2xx is success) and a possible message for further information if there was an error (such as max_file_size exceeded).

The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes.

The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx" /> attributes if desired.

The expires attribute is the Unix timestamp before which the form must be submitted before it is invalidated.

The signature attribute is the HMAC signature of the form. Here is sample code for computing the signature:

```
import hmac
from hashlib import sha512
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page'  # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = b'mykey'

hmac_body = '%s\n%s\n%s\n%s\n%s' % (
    path, redirect, max_file_size, max_file_count, expires)
signature = hmac.new(key, hmac_body.encode('utf-8'), sha512).hexdigest()
```

The key is the value of either the account (X-Account-Meta-Temp-URL-Key, X-Account-Meta-Temp-Url-Key-2) or the container (X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys.

Be certain to use the full path, from the /v1/ onward. Note that x_delete_at and x_delete_after are not used in signature generation as they are both optional attributes.

The command line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature.

Also note that the file attributes must be after the other attributes in order to be processed correctly. If attributes come after the file, they won't be sent with the subrequest (there is no way to parse all the attributes on the server side without reading the whole thing into memory; to service many requests, some with large files, there just isn't enough memory on the server, so attributes following the file are simply ignored).

Bases: object FormPost Middleware See above for a full description.

The proxy logs created for any subrequests made will have swift.source set to FP.

app The next WSGI filter or app in the paste.deploy chain.

conf The configuration dict for the middleware.

The next WSGI application/filter in the paste.deploy pipeline.

The filter configuration dict.

The maximum size of any attribute's value. Any additional data will be truncated.

The size of data to read from the form at any given time.

Returns the WSGI filter for use with paste.deploy.

The gatekeeper middleware imposes restrictions on the headers that may be included with requests and responses. Request headers are filtered to remove headers that should never be generated by a client. Similarly, response headers are filtered to remove private headers that should never be passed to a client.

The gatekeeper middleware must always be present in the proxy server wsgi pipeline. It should be configured close to the start of the pipeline specified in /etc/swift/proxy-server.conf, immediately after catch_errors and before any other middleware. It is essential that it is configured ahead of all middlewares using system metadata in order that they function correctly.
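For instance, the start of a pipeline honouring this ordering might look like the following (a sketch; the ellipsis stands for the remaining middleware in your deployment):

```
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache ... proxy-server
```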
If gatekeeper middleware is not configured in the pipeline then it will be automatically inserted close to the start of the pipeline by the proxy server. A list of python regular expressions that will be used to match against outbound response headers. Matching headers will be removed from the response. Bases: object Healthcheck middleware used for monitoring. If the path is /healthcheck, it will respond 200 with OK as the body. If the optional config parameter disable_path is set, and a file is present at that path, it will respond 503 with DISABLED BY FILE as the body. Returns a 503 response with DISABLED BY FILE in the body. Returns a 200 response with OK in the body. Keymaster middleware should be deployed in conjunction with the Encryption middleware. Bases: object Base middleware for providing encryption keys. This provides some basic helpers for: loading from a separate config path, deriving keys based on path, and installing a swift.callback.fetchcryptokeys hook in the request environment. Subclasses should define logroute, keymasteropts, and keymasterconfsection attributes, and implement the getroot_secret function. Creates an encryption key that is unique for the given path. path the (WSGI string) path of the resource being" }, { "data": "secret_id the id of the root secret from which the key should be derived. an encryption key. UnknownSecretIdError if the secret_id is not recognised. Bases: BaseKeyMaster Middleware for providing encryption keys. The middleware requires its encryption root secret to be set. This is the root secret from which encryption keys are derived. This must be set before first use to a value that is at least 256 bits. The security of all encrypted data critically depends on this key, therefore it should be set to a high-entropy value. For example, a suitable value may be obtained by generating a 32 byte (or longer) value using a cryptographically secure random number generator. Changing the root secret is likely to result in data loss. Bases: WSGIContext The simple scheme for key derivation is as follows: every path is associated with a key, where the key is derived from the path itself in a deterministic fashion such that the key does not need to be stored. Specifically, the key for any path is an HMAC of a root key and the path itself, calculated using an SHA256 hash function: ``` <pathkey> = HMACSHA256(<root_secret>, <path>) ``` Setup container and object keys based on the request path. Keys are derived from request path. The id entry in the results dict includes the part of the path used to derive keys. Other keymaster implementations may use a different strategy to generate keys and may include a different type of id, so callers should treat the id as opaque keymaster-specific data. key_id if given this should be a dict with the items included under the id key of a dict returned by this method. A dict containing encryption keys for object and container, and entries id and allids. The allids entry is a list of key id dicts for all root secret ids including the one used to generate the returned keys. Bases: object Swift middleware to Keystone authorization system. In Swifts proxy-server.conf add this keystoneauth middleware and the authtoken middleware to your pipeline. Make sure you have the authtoken middleware before the keystoneauth middleware. The authtoken middleware will take care of validating the user and keystoneauth will authorize access. The sample proxy-server.conf shows a sample pipeline that uses keystone. 
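Such a pipeline might look roughly like the following (a sketch; a real deployment will typically carry additional middleware such as gatekeeper, proxy-logging and slo):

```
[pipeline:main]
pipeline = catch_errors cache authtoken keystoneauth proxy-server
```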
See proxy-server.conf-sample.

The authtoken middleware is shipped with keystonemiddleware - it has no dependencies other than itself, so you can either install it by copying the file directly into your python path or by installing keystonemiddleware.

If support is required for unvalidated users (as with anonymous access) or for formpost/staticweb/tempurl middleware, authtoken will need to be configured with delay_auth_decision set to true. See the Keystone documentation for more detail on how to configure the authtoken middleware.

In proxy-server.conf you will need to have account auto-creation set to true:

```
[app:proxy-server]
account_autocreate = true
```

And add a swift authorization filter section, such as:

```
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```

The user who is able to give ACL / create Containers permissions will be the user with a role listed in the operator_roles setting, which by default includes the admin and the swiftoperator roles.

The keystoneauth middleware maps a Keystone project/tenant to an account in Swift by adding a prefix (AUTH_ by default) to the tenant/project id. For example, if the project id is 1234, the path is /v1/AUTH_1234.

If you need to have a different reseller_prefix to be able to mix different auth servers you can configure the option reseller_prefix in your keystoneauth entry like this:

```
reseller_prefix = NEWAUTH
```

Don't forget to also update the Keystone service endpoint configuration to use NEWAUTH in the path.

It is possible to have several accounts associated with the same project. This is done by listing several prefixes as shown in the following example:

```
reseller_prefix = AUTH, SERVICE
```

This means that for project id 1234, the paths /v1/AUTH_1234 and /v1/SERVICE_1234 are associated with the project and are authorized using roles that a user has with that project. The core use of this feature is that it is possible to provide different rules for each account prefix. The following parameters may be prefixed with the appropriate prefix:

```
operator_roles
service_roles
```

For backward compatibility, if either of these parameters is specified without a prefix then it applies to all reseller_prefixes. Here is an example, using two prefixes:

```
reseller_prefix = AUTH, SERVICE
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, some_other_role
```

X-Service-Token tokens are supported by the inclusion of the service_roles configuration option. When present, this option requires that the X-Service-Token header supply a token from a user who has a role listed in service_roles. Here is an example configuration:

```
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
```

The keystoneauth middleware supports cross-tenant access control using the syntax <tenant>:<user> to specify a grantee in container Access Control Lists (ACLs). For a request to be granted by an ACL, the grantee <tenant> must match the UUID of the tenant to which the request X-Auth-Token is scoped and the grantee <user> must match the UUID of the user authenticated by that token.

Note that names must no longer be used in cross-tenant ACLs because with the introduction of domains in keystone names are no longer globally unique.
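For example, granting read access to a particular user in another project by UUID might look like this with the swift CLI (a sketch; both UUIDs are placeholders):

```
swift post container1 --read-acl '<project-uuid>:<user-uuid>'
```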
For backwards compatibility, ACLs using names will be granted by keystoneauth when it can be established that the grantee tenant, the grantee user and the tenant being accessed are either not yet in a domain (e.g. the X-Auth-Token has been obtained via the keystone v2 API) or are all in the default domain to which legacy accounts would have been migrated. The default domain is identified by its UUID, which by default has the value default. This can be changed by setting the default_domain_id option in the keystoneauth configuration:

```
default_domain_id = default
```

The backwards compatible behavior can be disabled by setting the config option allow_names_in_acls to false:

```
allow_names_in_acls = false
```

To enable this backwards compatibility, keystoneauth will attempt to determine the domain id of a tenant when any new account is created, and persist this as account metadata. If an account is created for a tenant using a token with the reseller admin role that is not scoped on that tenant, keystoneauth is unable to determine the domain id of the tenant; keystoneauth will assume that the tenant may not be in the default domain and therefore not match names in ACLs for that account.

By default, middleware higher in the WSGI pipeline may override auth processing, useful for middleware such as tempurl and formpost. If you know you're not going to use such middleware and you want a bit of extra security you can disable this behaviour by setting the allow_overrides option to false:

```
allow_overrides = false
```

app The next WSGI app in the pipeline

conf The dict of configuration values

Authorize an anonymous request. None if authorization is granted, an error page otherwise.

Deny WSGI Request. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not.

Returns a WSGI filter app for use with paste.deploy.

List endpoints for an object, account or container.

This middleware makes it possible to integrate swift with software that relies on data locality information to avoid network overhead, such as Hadoop.

Using the original API, answers requests of the form:

```
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
```

with a JSON-encoded list of endpoints of the form:

```
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
```

correspondingly, e.g.:

```
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
```

Using the v2 API, answers requests of the form:

```
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
```

with a JSON-encoded dictionary containing a key endpoints that maps to a list of endpoints having the same form as described above, and a key headers that maps to a dictionary of headers that should be sent with a request made to the endpoints, e.g.:

```
{"endpoints": ["http://10.1.1.1:6210/sda1/2/a/c3/o1",
               "http://10.1.1.1:6230/sda3/2/a/c3/o1",
               "http://10.1.1.1:6240/sda4/2/a/c3/o1"],
 "headers": {"X-Backend-Storage-Policy-Index": "1"}}
```

In this example, the headers dictionary indicates that requests to the endpoint URLs should include the header X-Backend-Storage-Policy-Index: 1 because the object's container is using storage policy index 1.
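A consumer of this API might query the v2 endpoint like this (a sketch using the python requests library; the host and names are placeholders):

```
import requests

resp = requests.get('http://proxy.example.com:8080/endpoints/v2/a/c3/o1')
data = resp.json()

# Each endpoint URL addresses an object server holding a replica of the
# object; the headers should accompany any request sent to those URLs.
for url in data['endpoints']:
    print(url, data['headers'])
```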
The /endpoints/ path is customizable (list_endpoints_path configuration parameter).

Intended for consumption by third-party services living inside the cluster (as the endpoints make sense only inside the cluster behind the firewall); potentially written in a different language.

This is why it's provided as a REST API and not just a Python API: to avoid requiring clients to write their own ring parsers in their languages, and to avoid the necessity to distribute the ring file to clients and keep it up-to-date.

Note that the call is not authenticated, which means that a proxy with this middleware enabled should not be open to an untrusted environment (everyone can query the locality data using this middleware).

Bases: object List endpoints for an object, account or container. See above for a full description. Uses configuration parameter swift_dir (default /etc/swift).

app The next WSGI filter or app in the paste.deploy chain.

conf The configuration dict for the middleware.

Get the ring object to use to handle a request based on its policy.

policy index as defined in swift.conf

appropriate ring object

Bases: object Caching middleware that manages caching in swift.

Created on February 27, 2012

A filter that disallows any paths that contain defined forbidden characters or that exceed a defined length.

Place early in the proxy-server pipeline after the left-most occurrence of the proxy-logging middleware (if present) and before the final proxy-logging middleware (if present) or the proxy-server app itself, e.g.:

```
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache ratelimit tempauth sos proxy-logging proxy-server

[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
```

There are default settings for forbidden_chars (FORBIDDEN_CHARS) and maximum_length (MAX_LENGTH).

The filter returns HTTPBadRequest if the path is invalid.

@author: eamonn-otoole

Object versioning in Swift has 3 different modes. There are two legacy modes that have similar APIs with a slight difference in behavior, and this middleware introduces a new mode with a completely redesigned API and implementation.

In terms of the implementation, this middleware relies heavily on the use of static links to reduce the amount of backend data movement that was part of the two legacy modes. It also introduces a new API for enabling the feature and to interact with older versions of an object.

This new mode is not backwards compatible or interchangeable with the two legacy modes. This means that existing containers that are being versioned by the two legacy modes cannot enable the new mode. The new mode can only be enabled on a new container or a container without either the X-Versions-Location or X-History-Location header set. Attempting to enable the new mode on a container with either header will result in a 400 Bad Request response.

After the introduction of this feature, containers in a Swift cluster will be in one of 3 possible states: 1. Object Versioning never enabled, 2. Object Versioning Enabled, or 3. Object Versioning Disabled. Once versioning has been enabled on a container, it will always have a flag stating whether it is either enabled or disabled.

Clients enable object versioning on a container by performing either a PUT or POST request with the header X-Versions-Enabled: true. Upon enabling the versioning for the first time, the middleware will create a hidden container where object versions are stored.
This hidden container will inherit the same Storage Policy as its parent container. To disable, clients send a POST request with the header X-Versions-Enabled: false. When versioning is disabled, the old versions remain unchanged. To delete a versioned container, versioning must be disabled and all versions of all objects must be deleted before the container can be deleted. At such time, the hidden container will also be deleted. When data is PUT into a versioned container (a container with the versioning flag enabled), the actual object is written to a hidden container and a symlink object is written to the parent container. Every object is assigned a version id. This id can be retrieved from the X-Object-Version-Id header in the PUT response. Note When object versioning is disabled on a container, new data will no longer be versioned, but older versions remain untouched. Any new data PUT will result in an object with a null version-id. The versioning API can be used to both list and operate on previous versions even while versioning is disabled. If versioning is re-enabled and an overwrite occurs on a null id object, the object will be versioned off with a regular version-id. A GET to a versioned object will return the current version of the object. The X-Object-Version-Id header is also returned in the response. A POST to a versioned object will update the most current object metadata as normal, but will not create a new version of the object. In other words, new versions are only created when the content of the object changes. On DELETE, the middleware will write a zero-byte delete marker object version that notes when the delete took place. The symlink object will also be deleted from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the previous version's content will still be recoverable. Clients can now operate on previous versions of an object using this new versioning API. First, to list previous versions, issue a GET request to the versioned container with the query parameter:
```
?versions
```
To list a container with a large number of object versions, clients can also use the version_marker parameter together with the marker parameter. While the marker parameter is used to specify an object name, the version_marker will be used to specify the version id. All other pagination parameters can be used in conjunction with the versions parameter. During container listings, delete markers can be identified with the content-type application/x-deleted;swift_versions_deleted=1. The most current version of an object can be identified by the field is_latest. To operate on previous versions, clients can use the query parameter:
```
?version-id=<id>
```
where the <id> is the value from the X-Object-Version-Id header. Only COPY, HEAD, GET and DELETE operations can be performed on previous versions. Either a PUT or POST request with a version-id parameter will result in a 400 Bad Request response. A HEAD/GET request to a delete-marker will result in a 404 Not Found response. When issuing DELETE requests with a version-id parameter, delete markers are not written down. A DELETE request with a version-id parameter to the current object will result in both the symlink and the backing data being deleted. A DELETE to any other version will result in that version only being deleted and no changes made to the symlink pointing to the current version.
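Putting the listing and version-id parameters together, a rough Python sketch might look like this (cluster details are placeholders; the version_id and is_latest fields are the ones the JSON listing described above exposes):
```
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

# Hypothetical values; substitute your own cluster details.
STORAGE_URL = 'http://127.0.0.1:8080/v1/AUTH_test'
TOKEN = '<token>'

def swift_get(path):
    req = Request(STORAGE_URL + path)
    req.add_header('X-Auth-Token', TOKEN)
    return urlopen(req)

# List every version of every object in the container.
with swift_get('/container?versions&format=json') as resp:
    listing = json.load(resp)

for entry in listing:
    # Delete markers carry the special content type noted above.
    is_marker = entry['content_type'].startswith('application/x-deleted')
    print(entry['name'], entry['version_id'], entry['is_latest'], is_marker)

# Fetch one specific older version by its id.
if listing:
    entry = listing[-1]
    with swift_get('/container/%s?version-id=%s'
                   % (quote(entry['name']), entry['version_id'])) as resp:
        data = resp.read()
```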
To enable this new mode in a Swift cluster the versioned_writes and symlink middlewares must be added to the proxy pipeline, and you must also set the option allow_object_versioning to True. Bases: ObjectVersioningContext Bases: object Counts bytes read from file_like so we know how big the object is that the client just PUT. This is particularly important when the client sends a chunk-encoded body, so we don't have a Content-Length header available. Bases: ObjectVersioningContext Handle request to delete a user's container. As part of deleting a container, this middleware will also delete the hidden container holding object versions. Before a user's container can be deleted, swift must check if there are still old object versions from that container. Only after disabling versioning and deleting all object versions can a container be deleted. Handle request for container resource. On PUT, POST set version location and enabled flag sysmeta. For container listings of a versioned container, update the objects' bytes and etag to use the target's instead of using the symlink info. Bases: ObjectVersioningContext Handle DELETE requests. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a POST request to an object in a versioned container. If the response is a 307 because the POST went to a symlink, follow the symlink and send the request to the versioned object req original request. versions_cont container where previous versions of the object are stored. account account name. Check if the current version of the object is a versions-symlink; if not, it's because this object was added to the container when versioning was not enabled. We'll need to copy it into the versions container now that versioning is enabled. Also, put the new data from the client into the versions container and add a static symlink in the versioned container. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Handle a PUT?version-id request and create/update the is_latest link to point to the specific version. Expects a valid version id. Handle version-id request for object resource. When a request contains a version-id=<id> parameter, the request is acted upon the actual version of that object. Version-aware operations require that the container is versioned, but do not require that the versioning is currently enabled. Users should be able to operate on older versions of an object even if versioning is currently suspended. PUT and POST requests are not allowed as that would overwrite the contents of the versioned object. req The original request versions_cont container holding versions of the requested obj api_version should be v1 unless swift bumps api version account account name string container container name string object object name string is_enabled is versioning currently enabled version version of the object to act on Bases: WSGIContext Logging middleware for the Swift proxy. This serves as both the default logging implementation and an example of how to plug in your own logging format/method.
The logging format implemented below is as follows:
```
client_ip remote_addr end_time.datetime method path protocol
    status_int referer user_agent auth_token bytes_recvd bytes_sent
    client_etag transaction_id headers request_time source log_info
    start_time end_time policy_index
```
These values are space-separated, and each is url-encoded, so that they can be separated with a simple .split(). remote_addr is the contents of the REMOTE_ADDR environment variable, while client_ip is swift's best guess at the end-user IP, extracted variously from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR environment variable. status_int is the integer part of the status string passed to this middleware's start_response function, unless the WSGI environment has an item with key swift.proxy_logging_status, in which case the value of that item is used. Other middlewares may set swift.proxy_logging_status to override the logging of status_int. In either case, the logged status_int value is forced to 499 if a client disconnect is detected while this middleware is handling a request, or 500 if an exception is caught while handling a request. source (swift.source in the WSGI environment) indicates the code that generated the request, such as most middleware. (See below for more detail.) log_info (swift.log_info in the WSGI environment) is for additional information that could prove quite useful, such as any x-delete-at value or other behind the scenes activity that might not otherwise be detectable from the plain log information. Code that wishes to add additional log information should use code like env.setdefault('swift.log_info', []).append(your_info) so as to not disturb others' log information. Values that are missing (e.g. due to a header not being present) or zero are generally represented by a single hyphen (-). Note The message format may be configured using the log_msg_template option, allowing fields to be added, removed, re-ordered, and even anonymized. For more information, see https://docs.openstack.org/swift/latest/logs.html The proxy-logging can be used twice in the proxy server's pipeline when there is middleware installed that can return custom responses that don't follow the standard pipeline to the proxy server. For example, with staticweb, the middleware might intercept a request to /v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve /v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original request using the 2nd request's body. In this instance the subrequest will be logged by the rightmost middleware (with a swift.source set) and the outgoing request (with body overridden) will be logged by the leftmost middleware. Requests that follow the normal pipeline (use the same wsgi environment throughout) will not be double logged because an environment variable (swift.proxy_access_log_made) is checked/set when a log is made. All middleware making subrequests should take care to set swift.source when needed. With the doubled proxy logs, any consumer/processor of swift's proxy logs should look at the swift.source field, the rightmost log value, to decide if this is a middleware subrequest or not. A log processor calculating bandwidth usage will want to only sum up logs with no swift.source.
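To make the "simple .split()" point concrete, here is a hedged sketch of a log processor for the default field order above; the field names are taken from the format string, and the duplicate end-time field is renamed locally so the dict keeps both values:
```
from urllib.parse import unquote

# Field order from the default logging format shown above.
FIELDS = [
    'client_ip', 'remote_addr', 'end_time_datetime', 'method', 'path',
    'protocol', 'status_int', 'referer', 'user_agent', 'auth_token',
    'bytes_recvd', 'bytes_sent', 'client_etag', 'transaction_id',
    'headers', 'request_time', 'source', 'log_info', 'start_time',
    'end_time', 'policy_index',
]

def parse_access_line(line):
    # Every value is url-encoded, so a plain whitespace split is safe.
    return dict(zip(FIELDS, (unquote(v) for v in line.split())))

def counts_toward_bandwidth(entry):
    # Middleware subrequests set swift.source; a '-' means it was unset,
    # so only those lines represent end-user traffic.
    return entry['source'] == '-'
```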
req swob.Request object for the request status_int integer code for the response status bytes_received bytes successfully read from the request body bytes_sent bytes yielded to the WSGI server start_time timestamp request started end_time timestamp request completed resp_headers dict of the response headers ttfb time to first byte wire_status_int the on the wire status int Bases: Exception Bases: object Rate limiting middleware Rate limits requests on both an Account and Container level. Limits are configurable. Returns a list of key (used in memcache), ratelimit tuples. Keys should be checked in order. req swob request account_name account name from path container_name container name from path obj_name object name from path global_ratelimit this account has an account wide ratelimit on all writes combined Performs rate limiting and account white/black listing. Sleeps if necessary. If self.memcache_client is not set, immediately returns None. account_name account name from path container_name container name from path obj_name object name from path paste.deploy app factory for creating WSGI proxy apps. Returns number of requests allowed per second for given size. Parses general parms for rate limits looking for things that start with the provided name_prefix within the provided conf and returns lists for both internal use and for /info conf conf dict to parse name_prefix prefix of config parms to look for info set to return extra stuff for /info registration Bases: object Middleware that makes an entire cluster or individual accounts read only. Check whether an account should be read-only. This considers both the cluster-wide config value as well as the per-account override in X-Account-Sysmeta-Read-Only. paste.deploy app factory for creating WSGI proxy apps. Bases: object Recon middleware used for monitoring. /recon/load|mem|async will return various system metrics. Needs to be added to the pipeline and requires a filter declaration in the [account|container|object]-server conf file: [filter:recon] use = egg:swift#recon recon_cache_path = /var/cache/swift get # of async pendings get auditor info get devices get disk utilization statistics get # of drive audit errors get expirer info get info from /proc/loadavg get info from /proc/meminfo get ALL mounted fs from /proc/mounts get obj/container/account quarantine counts get reconstruction info get relinker info, if any get replication info get all ring md5sums get sharding info get info from /proc/net/sockstat and sockstat6 Note: The mem value is actually kernel pages, but we return bytes allocated based on the system's page size. get md5 of swift.conf get current time list unmounted (failed?) devices get updater info get swift version Server side copy is a feature that enables users/clients to COPY objects between accounts and containers without the need to download and then re-upload objects, thus eliminating additional bandwidth consumption and also saving time. This may be used when renaming/moving an object which in Swift is a (COPY + DELETE) operation. The server side copy middleware should be inserted in the pipeline after auth and before the quotas and large object middlewares. If it is not present in the pipeline in the proxy-server configuration file, it will be inserted automatically. There is no configurable option provided to turn off server side copy. All metadata of the source object is preserved during object copy. One can also provide additional metadata during the PUT/COPY request. This will overwrite any existing conflicting keys.
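As a quick sketch of that metadata behavior before the curl walk-through that follows (the storage URL, token and metadata key below are placeholders):
```
from urllib.request import Request, urlopen

# Hypothetical values; substitute your own storage URL and token.
STORAGE_URL = 'http://127.0.0.1:8080/v1/AUTH_test'
TOKEN = '<token>'

# Server-side copy via PUT + X-Copy-From; the request body must be empty.
# X-Object-Meta-Reviewed is a hypothetical key: it is added to (or
# overwrites the same key among) the metadata carried over from the source.
req = Request(STORAGE_URL + '/container1/destination_obj',
              data=b'', method='PUT')
req.add_header('X-Auth-Token', TOKEN)
req.add_header('X-Copy-From', '/container2/source_obj')
req.add_header('X-Object-Meta-Reviewed', 'true')
with urlopen(req) as resp:
    print(resp.status)  # typically 201 Created on success
```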
Server side copy can also be used to change the content-type of an existing object. The destination container must exist before requesting copy of the object. When several replicas exist, the system copies from the most recent replica. That is, the copy operation behaves as though the X-Newest header is in the request. The request to copy an object should have no body (i.e. the content-length of the request must be zero). There are two ways in which an object can be copied: Send a PUT request to the new object (destination/target) with an additional header named X-Copy-From specifying the source object (in /container/object format). Example:
```
curl -i -X PUT http://<storage_url>/container1/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container2/source_obj' -H 'Content-Length: 0'
```
Send a COPY request with an existing object in URL with an additional header named Destination specifying the destination/target object (in /container/object format). Example:
```
curl -i -X COPY http://<storage_url>/container2/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container1/destination_obj' -H 'Content-Length: 0'
```
Note that if the incoming request has some conditional headers (e.g. Range, If-Match), the source object will be evaluated for these headers (i.e. if PUT with both X-Copy-From and Range, Swift will make a partial copy to the destination object). Objects can also be copied from one account to another account if the user has the necessary permissions (i.e. permission to read from the container in the source account and permission to write to the container in the destination account). Similar to the examples mentioned above, there are two ways to copy objects across accounts: Like the example above, send a PUT request to copy the object but with an additional header named X-Copy-From-Account specifying the source account. Example:
```
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj -H 'X-Auth-Token: <token>' -H 'X-Copy-From: /container/source_obj' -H 'X-Copy-From-Account: AUTH_test2' -H 'Content-Length: 0'
```
Like the previous example, send a COPY request but with an additional header named Destination-Account specifying the name of the destination account. Example:
```
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj -H 'X-Auth-Token: <token>' -H 'Destination: /container/destination_obj' -H 'Destination-Account: AUTH_test1' -H 'Content-Length: 0'
```
The best option to copy a large object is to copy segments individually. To copy the manifest object of a large object, add the query parameter to the copy request:
```
?multipart-manifest=get
```
If a request is sent without the query parameter, an attempt will be made to copy the whole object but will fail if the object size is greater than 5GB. Bases: WSGIContext Please see the SLO docs for Static Large Objects further details. This StaticWeb WSGI middleware will serve container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests. When using keystone for authentication set delay_auth_decision = true in the authtoken middleware configuration in your /etc/swift/proxy-server.conf file. If you want to use it with authenticated requests, set the X-Web-Mode: true header on the request. The staticweb filter should be added to the pipeline in your /etc/swift/proxy-server.conf file just after any auth middleware. Also, the configuration section for the staticweb middleware itself needs to be added.
For example:
```
[DEFAULT]
...
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth staticweb proxy-logging proxy-server
...
[filter:staticweb]
use = egg:swift#staticweb
```
Any publicly readable containers (for example, X-Container-Read: .r:*, see ACLs for more information on this) will be checked for X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values:
```
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
```
If X-Container-Meta-Web-Index is set, any <index.name> files will be served without having to specify the <index.name> part. For instance, setting X-Container-Meta-Web-Index: index.html will be able to serve the object /pseudo/path/index.html with just /pseudo/path or /pseudo/path/ If X-Container-Meta-Web-Error is set, any errors (currently just 401 Unauthorized and 404 Not Found) will instead serve the /<status.code><error.name.suffix> object. For instance, setting X-Container-Meta-Web-Error: error.html will serve /404error.html for requests for paths not found. For pseudo paths that have no <index.name>, this middleware can serve HTML file listings if you set the X-Container-Meta-Web-Listings: true metadata item on the container. Note that the listing must be authorized; you may want a container ACL like X-Container-Read: .r:*,.rlistings. If listings are enabled, the listings can have a custom style sheet by setting the X-Container-Meta-Web-Listings-CSS header. For instance, setting X-Container-Meta-Web-Listings-CSS: listing.css will make listings link to the /listing.css style sheet. If you view source in your browser on a listing page, you will see the well defined document structure that can be styled. Additionally, prefix-based TempURL parameters may be used to authorize requests instead of making the whole container publicly readable. This gives clients dynamic discoverability of the objects available within that prefix. Note tempurl_prefix values should typically end with a slash (/) when used with StaticWeb. StaticWeb's redirects will not carry over any TempURL parameters, as they likely indicate that the user created an overly-broad TempURL. By default, the listings will be rendered with a label of "Listing of /v1/account/container/path". This can be altered by setting a X-Container-Meta-Web-Listings-Label: <label>. For example, if the label is set to example.com, a label of "Listing of example.com/path" will be used instead. The content-type of directory marker objects can be modified by setting the X-Container-Meta-Web-Directory-Type header. If the header is not set, application/directory is used by default. Directory marker objects are 0-byte objects that represent directories to create a simulated hierarchical structure. Example usage of this middleware via swift: Make the container publicly readable:
```
swift post -r '.r:*' container
```
You should be able to get objects directly, but no index.html resolution or listings. Set an index file directive:
```
swift post -m 'web-index:index.html' container
```
You should be able to hit paths that have an index.html without needing to type the index.html part. Turn on listings:
```
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
```
Now you should see object listings for paths and pseudo paths that have no index.html.
Enable a custom listings style sheet:
```
swift post -m 'web-listings-css:listings.css' container
```
Set an error file:
```
swift post -m 'web-error:error.html' container
```
Now 401s should load 401error.html, 404s should load 404error.html, etc. Set Content-Type of directory marker object:
```
swift post -m 'web-directory-type:text/directory' container
```
Now 0-byte objects with a content-type of text/directory will be treated as directories rather than objects. Bases: object The Static Web WSGI middleware filter; serves container data as a static web site. See staticweb for an overview. The proxy logs created for any subrequests made will have swift.source set to SW. app The next WSGI application/filter in the paste.deploy pipeline. conf The filter configuration dict. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Only used in tests. Returns a Static Web WSGI filter for use with paste.deploy. Symlink Middleware Symlinks are objects stored in Swift that contain a reference to another object (hereinafter, this is called the target object). They are analogous to symbolic links in Unix-like operating systems. The existence of a symlink object does not affect the target object in any way. An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Clients create a Swift symlink by performing a zero-length PUT request with the header X-Symlink-Target: <container>/<object>. For a cross-account symlink, the header X-Symlink-Target-Account: <account> must be included. If omitted, it is inserted automatically with the account of the symlink object in the PUT request process. Symlinks must be zero-byte objects. Attempting to PUT a symlink with a non-empty request body will result in a 400-series error. Also, POST with X-Symlink-Target header always results in a 400-series error. The target object need not exist at symlink creation time. Clients may optionally include an X-Symlink-Target-Etag: <etag> header during the PUT. If present, this will create a static symlink instead of a dynamic symlink. Static symlinks point to a specific object rather than a specific name. They do this by using the value set in their X-Symlink-Target-Etag header when created to verify it still matches the ETag of the object they're pointing at on a GET. In contrast to a dynamic symlink, the target object referenced in the X-Symlink-Target header must exist and its ETag must match the X-Symlink-Target-Etag or the symlink creation will return a client error. A GET/HEAD request to a symlink will result in a request to the target object referenced by the symlink's X-Symlink-Target-Account and X-Symlink-Target headers. The response of the GET/HEAD request will contain a Content-Location header with the path location of the target object. A GET/HEAD request to a symlink with the query parameter ?symlink=get will result in the request targeting the symlink itself. A symlink can point to another symlink. Chained symlinks will be traversed until the target is not a symlink. If the number of chained symlinks exceeds the limit symloop_max, an error response will be produced. The value of symloop_max can be defined in the symlink config section of proxy-server.conf. If not specified, the default symloop_max value is 2. If a value less than 1 is specified, the default value will be used. If a static symlink (i.e.
a symlink created with an X-Symlink-Target-Etag header) targets another static symlink, both of the X-Symlink-Target-Etag headers must match the target object for the GET to succeed. If a static symlink targets a dynamic symlink (i.e. a symlink created without an X-Symlink-Target-Etag header) then the X-Symlink-Target-Etag header of the static symlink must be the Etag of the zero-byte object. If a symlink with an X-Symlink-Target-Etag targets a large object manifest it must match the ETag of the manifest (e.g. the ETag as returned by multipart-manifest=get or the value in the X-Manifest-Etag header). A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request to the target object. Therefore issuing a HEAD request to the symlink will return the target metadata, and issuing a GET request to the symlink will return the data and metadata of the target object. To return the symlink metadata (with its empty body) a GET/HEAD request with the ?symlink=get query parameter must be sent to a symlink object. A POST request to a symlink will result in a 307 Temporary Redirect response. The response will contain a Location header with the path of the target object as the value. The request is never redirected to the target object by Swift. Nevertheless, the metadata in the POST request will be applied to the symlink because object servers cannot know for sure if the current object is a symlink or not in eventual consistency. A symlink's Content-Type is completely independent from its target. As a convenience Swift will automatically set the Content-Type on a symlink PUT if not explicitly set by the client. If the client sends an X-Symlink-Target-Etag Swift will set the symlink's Content-Type to that of the target, otherwise it will be set to application/symlink. You can review a symlink's Content-Type using the ?symlink=get interface. You can change a symlink's Content-Type using a POST request. The symlink's Content-Type will appear in the container listing. A DELETE request to a symlink will delete the symlink itself. The target object will not be deleted. A COPY request, or a PUT request with an X-Copy-From header, to a symlink will copy the target object. The same request to a symlink with the query parameter ?symlink=get will copy the symlink itself. An OPTIONS request to a symlink will respond with the options for the symlink only; the request will not be redirected to the target object. Please note that if the symlink's target object is in another container with CORS settings, the response will not reflect the settings. Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and will result in a 400-series error. The GET/HEAD tempurls honor the scope of the tempurl key. Container tempurl will only work on symlinks where the target container is the same as the symlink. In case a symlink targets an object in a different container, a GET/HEAD request will result in a 401 Unauthorized error. The account level tempurl will allow cross-container symlinks, but not cross-account symlinks. If a symlink object is overwritten while it is in a versioned container, the symlink object itself is versioned, not the referenced object. A GET request with query parameter ?format=json to a container which contains symlinks will respond with additional information symlink_path for each symlink object in the container listing. The symlink_path value is the target path of the symlink. Clients can differentiate symlinks and other objects by this function.
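A short hedged sketch of that listing-based detection (cluster details below are placeholders):
```
import json
from urllib.request import Request, urlopen

# Hypothetical values; substitute your own cluster details.
STORAGE_URL = 'http://127.0.0.1:8080/v1/AUTH_test'
TOKEN = '<token>'

req = Request(STORAGE_URL + '/container?format=json')
req.add_header('X-Auth-Token', TOKEN)
with urlopen(req) as resp:
    listing = json.load(resp)

for entry in listing:
    if 'symlink_path' in entry:
        # Symlink objects expose their target path in the JSON listing.
        print('%s -> %s' % (entry['name'], entry['symlink_path']))
    else:
        print(entry['name'])
```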
Note that responses in any other format (e.g. ?format=xml) won't include symlink_path info. If an X-Symlink-Target-Etag header was included on the symlink, JSON container listings will include that value in a symlink_etag key and the target object's Content-Length will be included in the key symlink_bytes. If a static symlink targets a static large object manifest it will carry forward the SLO's size and slo_etag in the container listing using the symlink_bytes and slo_etag keys. However, manifests created before swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate the extra SLO information to the listing. Clients may recreate the manifest (COPY w/ ?multipart-manifest=get) before creating a static symlink to add the requisite metadata. Errors PUT with the header X-Symlink-Target with non-zero Content-Length will produce a 400 BadRequest error. POST with the header X-Symlink-Target will produce a 400 BadRequest error. GET/HEAD traversing more than symloop_max chained symlinks will produce a 409 Conflict error. PUT/GET/HEAD on a symlink that includes an X-Symlink-Target-Etag header that does not match the target will produce a 409 Conflict error. POSTs will produce a 307 Temporary Redirect error. Symlinks are enabled by adding the symlink middleware to the proxy server WSGI pipeline and including a corresponding filter configuration section in the proxy-server.conf file. The symlink middleware should be placed after slo, dlo and versioned_writes middleware, but before encryption middleware in the pipeline. See the proxy-server.conf-sample file for further details. Additional steps are required if the container sync feature is being used. Note Once you have deployed symlink middleware in your pipeline, you should neither remove the symlink middleware nor downgrade swift to a version earlier than symlinks being supported. Doing so may result in unexpected container listing results in addition to symlink objects behaving like a normal object. If container sync is being used then the symlink middleware must be added to the container sync internal client pipeline. The following configuration steps are required: Create a custom internal client configuration file for container sync (if one is not already in use) based on the sample file internal-client.conf-sample. For example, copy internal-client.conf-sample to /etc/swift/container-sync-client.conf. Modify this file to include the symlink middleware in the pipeline in the same way as described above for the proxy server. Modify the container-sync section of all container server config files to point to this internal client config file using the internal_client_conf_path option. For example:
```
internal_client_conf_path = /etc/swift/container-sync-client.conf
```
Note These container sync configuration steps will be necessary for container sync probe tests to pass if the symlink middleware is included in the proxy pipeline of a test cluster. Bases: WSGIContext Handle container requests. req a Request start_response start_response function Response Iterator after start_response called. Bases: object Middleware that implements symlinks. Symlinks are objects stored in Swift that contain a reference to another object (i.e., the target object). An important use case is to use a path in one container to access an object in a different container, with a different policy. This allows policy cost/performance trade-offs to be made on individual objects. Bases: WSGIContext Handle get/head request and in case the response is a symlink, redirect request to target object.
req HTTP GET or HEAD object request Response Iterator Handle get/head request when client sent parameter ?symlink=get req HTTP GET or HEAD object request with param ?symlink=get Response Iterator Handle object requests. req a Request start_response start_response function Response Iterator after start_response has been called Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect error message is returned to the client. Clients that POST to symlinks should understand that the POST is not redirected to the target object like in a HEAD/GET request. POSTs to a symlink will be handled just like a normal object by the object server. It cannot reject it because it may not have symlink state when the POST lands. The object server has no knowledge of what a symlink object is. On the other hand, on POST requests, the object server returns all sysmeta of the object. This method uses that sysmeta to determine if the stored object is a symlink or not. req HTTP POST object request HTTPTemporaryRedirect if POSTing to a symlink. Response Iterator Handle put request when it contains X-Symlink-Target header. Symlink headers are validated and moved to sysmeta namespace. :param req: HTTP PUT object request :returns: Response Iterator Helper function to translate from cluster-facing X-Object-Sysmeta-Symlink- headers to client-facing X-Symlink- headers. headers request headers dict. Note that the headers dict will be updated directly. Helper function to translate from client-facing X-Symlink-* headers to cluster-facing X-Object-Sysmeta-Symlink-* headers. headers request headers dict. Note that the headers dict will be updated directly. Test authentication and authorization system. Add to your pipeline in proxy-server.conf, such as:
```
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
```
Set account auto creation to true in proxy-server.conf:
```
[app:proxy-server]
account_autocreate = true
```
And add a tempauth filter section, such as:
```
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user64_dW5kZXJfc2NvcmU_YV9i = testing4
```
See the proxy-server.conf-sample for more information. All accounts/users are listed in the filter section. The format is:
```
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
```
If you want to be able to include underscores in the <account> or <user> portions, you can base64 encode them (with no equal signs) in a line like this:
```
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
```
There are three special groups: .reseller_admin can do anything to any account for this auth .reseller_reader can GET/HEAD anything in any account for this auth .admin can do anything within the account If none of these groups are specified, the user can only access containers that have been explicitly allowed for them by a .admin or .reseller_admin. The trailing optional storage_url allows you to specify an alternate URL to hand back to the user upon authentication. If not specified, this defaults to:
```
$HOST/v1/<reseller_prefix>_<account>
```
Where $HOST will do its best to resolve to what the requester would need to use to reach this host, <reseller_prefix> is from this section, and <account> is from the user_<account>_<user> name.
Note that $HOST cannot possibly handle when you have a load balancer in front of it that does https while TempAuth itself runs with http; in such a case, you'll have to specify the storage_url_scheme configuration value as an override. The reseller prefix specifies which parts of the account namespace this middleware is responsible for managing authentication and authorization. By default, the prefix is AUTH so accounts and tokens are prefixed by AUTH_. When a request's token and/or path start with AUTH_, this middleware knows it is responsible. We allow the reseller prefix to be a list. In tempauth, the first item in the list is used as the prefix for tokens and user groups. The other prefixes provide alternate accounts that users can access. For example if the reseller prefix list is AUTH, OTHER, a user with admin access to AUTH_account also has admin access to OTHER_account. The group .admin is normally needed to access an account (ACLs provide an additional way to access an account). You can specify the require_group parameter. This means that you also need the named group to access an account. If you have several reseller prefix items, prefix the require_group parameter with the appropriate prefix. If an X-Service-Token is presented in the request headers, the groups derived from the token are appended to the roles derived from X-Auth-Token. If X-Auth-Token is missing or invalid, X-Service-Token is not processed. The X-Service-Token is useful when combined with multiple reseller prefix items. In the following configuration, accounts prefixed SERVICE_ are only accessible if X-Auth-Token is from the end-user and X-Service-Token is from the glance user:
```
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
```
The name .service is an example. Unlike .admin, .reseller_admin, .reseller_reader it is not a reserved name. Please note that ACLs can be set on service accounts and are matched against the identity validated by X-Auth-Token. As such ACLs can grant access to a service account's container without needing to provide a service token, just like any other cross-reseller request using ACLs. If a swift_owner issues a POST or PUT to the account with the X-Account-Access-Control header set in the request, then this may allow certain types of access for additional users. Read-Only: Users with read-only access can list containers in the account, list objects in any container, retrieve objects, and view unprivileged account/container/object metadata. Read-Write: Users with read-write access can (in addition to the read-only privileges) create objects, overwrite existing objects, create new containers, and set unprivileged container/object metadata. Admin: Users with admin access are swift_owners and can perform any action, including viewing/setting privileged metadata (e.g. changing account ACLs). To generate headers for setting an account ACL:
```
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
```
To generate a curl command line from the above:
```
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { "admin": ["alice"], "read-write": ["bob", "carol"] }
headers = {"X-Account-Access-Control": format_acl(version=2, acl_dict=acl_data)}
header_str = " ".join(["-H \"%s: %s\"" % (k, v) for k, v in headers.items()])
print("curl -D- -X POST -H \"x-auth-token: $token\" %s $storage_url" % header_str)
'
```
Bases: object app The next WSGI app in the pipeline conf The dict of configuration values from the Paste config file Return a dict of ACL data from the account server via get_account_info. Auth systems may define their own format, serialization, structure, and capabilities implemented in the ACL headers and persisted in the sysmeta data. However, auth systems are strongly encouraged to be interoperable with Tempauth. X-Account-Access-Control swift.common.middleware.acl.parse_acl() swift.common.middleware.acl.format_acl() Returns None if the request is authorized to continue or a standard WSGI response callable if not. Returns a standard WSGI response callable with the status of 403 or 401 depending on whether the REMOTE_USER is set or not. Return a user-readable string indicating the errors in the input ACL, or None if there are no errors. Get groups for the given token. env The current WSGI environment dictionary. token Token to validate and return a group string for. None if the token is invalid or a string containing a comma separated list of groups the authenticated user is a member of. The first group in the list is also considered a unique identifier for that user. WSGI entry point for auth requests (ones that match the self.auth_prefix). Wraps env in swob.Request object and passes it down. env WSGI environment dictionary start_response WSGI callable Handles the various request for token and service end point(s) calls. There are various formats to support the various auth servers in the past. Examples:
```
GET <auth-prefix>/v1/<act>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
    X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
    X-Auth-Key: <key> or X-Storage-Pass: <key>
```
On successful authentication, the response will have X-Auth-Token and X-Storage-Token set to the token to use with Swift and X-Storage-URL set to the URL to the default Swift cluster to use. req The swob.Request to process. swob.Response, 2xx on success with data set as explained above. Entry point for auth requests (ones that match the self.auth_prefix). Should return a WSGI-style callable (such as swob.Response). req swob.Request object Returns a WSGI filter app for use with paste.deploy. TempURL Middleware Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in Swift, but the Swift account has no public access. The website can generate a URL that will provide GET access for a limited time to the resource. When the web browser user clicks on the link, the browser will download the object directly from Swift, obviating the need for the website to act as a proxy for the request. If the user were to share the link with all his friends, or accidentally post it on a forum, etc. the direct access would be limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs, which contain signatures which are valid for all objects which share a common prefix. These prefix-based URLs are useful for sharing a set of objects. Restrictions can also be placed on the ip that the resource is allowed to be accessed from. This can be useful for locking down where the urls can be used from. To create temporary URLs, first an X-Account-Meta-Temp-URL-Key header must be set on the Swift account. Then, an HMAC (RFC 2104) signature is generated using the HTTP method to allow (GET, PUT, DELETE, etc.), the Unix timestamp until which the access should be allowed, the full path to the object, and the key set on the account. The digest algorithm to be used may be configured by the operator. By default, HMAC-SHA256 and HMAC-SHA512 are supported. Check the tempurl.allowed_digests entry in the cluster's capabilities response to see which algorithms are supported by your deployment; see Discoverability for more information. On older clusters, the tempurl key may be present while the allowed_digests subkey is not; in this case, only HMAC-SHA1 is supported. For example, here is code generating the signature for a GET for 60 seconds on /v1/AUTH_account/container/object:
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
Be certain to use the full path, from the /v1/ onward. Let's say sig ends up equaling 732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b and expires ends up 1512508563. Then, for example, the website could provide a link to:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
```
For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also supported, and indicated by prefixing the signature with "<digest name>:". This is required for HMAC-SHA512 signatures. For example, comparable code for generating a HMAC-SHA512 signature would be:
```
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
    key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
```
Supposing that sig ends up equaling sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg== and expires ends up 1516741234, then the website could provide a link to:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO
Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
```
You may also use ISO 8601 UTC timestamps with the format "%Y-%m-%dT%H:%M:%SZ" instead of UNIX timestamps in the URL (but NOT in the code above for generating the signature!). So, the above HMAC-SHA256 URL could also be formulated as:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
```
If a prefix-based signature with the prefix pre is desired, set path to:
```
path = 'prefix:/v1/AUTH_account/container/pre'
```
The generated signature would be valid for all objects starting with pre. The middleware detects a prefix-based temporary URL by a query parameter called temp_url_prefix. So, if sig and expires would end up like above, the following URL would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```
Another valid URL:
```
https://swift-cluster.example.com/v1/AUTH_account/container/pre/
subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
```
If you wish to lock down the ip ranges from where the resource can be accessed to the ip 1.2.3.4:
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
The generated signature would only be valid from the ip 1.2.3.4. The middleware detects an ip-based temporary URL by a query parameter called temp_url_ip_range. So, if sig and expires would end up like above, the following URL would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
```
Similarly, to lock down the ip to a range of 1.2.3.X, so starting from the ip 1.2.3.0 to 1.2.3.255:
```
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
```
Then the following url would be valid:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
```
Any alteration of the resource path or query arguments of a temporary URL would result in 401 Unauthorized. Similarly, a PUT where GET was the allowed method would be rejected with 401 Unauthorized. However, HEAD is allowed if GET, PUT, or POST is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Swift. TempURL supports both account and container level keys. Each allows up to two keys to be set, allowing key rotation without invalidating all existing temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2, while container keys are specified by X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2. Signatures are checked against account and container keys, if present. With GET TempURLs, a Content-Disposition header will be set on the response so that browsers will interpret this as a file attachment to be saved.
The filename chosen is based on the object name, but you can override this with a filename query parameter. Modifying the above example:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
```
If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, like so:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
```
In some cases, the client might not be able to present the content of the object, but you still want the content to be saved locally with a specific filename. So you can cause Content-Disposition: inline; filename=... to be set on the response by adding the inline&filename=... parameter to the query string, like so:
```
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
```
This middleware understands the following configuration settings: A whitespace-delimited list of the headers to remove from incoming requests. Names may optionally end with * to indicate a prefix match. incoming_allow_headers is a list of exceptions to these removals. Default: x-timestamp x-open-expired A whitespace-delimited list of the headers allowed as exceptions to incoming_remove_headers. Names may optionally end with * to indicate a prefix match. Default: None A whitespace-delimited list of the headers to remove from outgoing responses. Names may optionally end with * to indicate a prefix match. outgoing_allow_headers is a list of exceptions to these removals. Default: x-object-meta-* A whitespace-delimited list of the headers allowed as exceptions to outgoing_remove_headers. Names may optionally end with * to indicate a prefix match. Default: x-object-meta-public-* A whitespace delimited list of request methods that are allowed to be used with a temporary URL. Default: GET HEAD PUT POST DELETE A whitespace delimited list of digest algorithms that are allowed to be used when calculating the signature for a temporary URL. Default: sha256 sha512 Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. Default headers to remove from incoming requests. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of exceptions to these removals. Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. Default headers to remove from outgoing responses. Simply a whitespace delimited list of header names; names can optionally end with * to indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of exceptions to these removals. Bases: object WSGI Middleware to grant temporary URLs specific access to Swift resources. See the overview for more information. The proxy logs created for any subrequests made will have swift.source set to TU. app The next WSGI filter or app in the paste.deploy chain. conf The configuration dict for the middleware.
HTTP user agent to use for subrequests. The next WSGI application/filter in the paste.deploy pipeline. The filter configuration dict. Headers to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY. Header with match prefixes to allow in incoming requests. Uppercase WSGI env style, like HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*. Headers to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_PRIVATE. Header with match prefixes to remove from incoming requests. Uppercase WSGI env style, like HTTP_X_SENSITIVE_*. Headers to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay. Header with match prefixes to allow in outgoing responses. Lowercase, like x-matches-remove-prefix-but-okay-*. Headers to remove from outgoing responses. Lowercase, like x-account-meta-temp-url-key. Header with match prefixes to remove from outgoing responses. Lowercase, like x-account-meta-private-*. Returns the WSGI filter for use with paste.deploy. Note This middleware supports two legacy modes of object versioning that are now replaced by a new mode. It is recommended to use the new Object Versioning mode for new containers. Object versioning in swift is implemented by setting a flag on the container to tell swift to version all objects in the container. The value of the flag is the URL-encoded container name where the versions are stored (commonly referred to as the archive container). The flag itself is one of two headers, which determines how object DELETE requests are handled: X-History-Location On DELETE, copy the current version of the object to the archive container, write a zero-byte delete marker object that notes when the delete took place, and delete the object from the versioned container. The object will no longer appear in container listings for the versioned container and future requests there will return 404 Not Found. However, the content will still be recoverable from the archive container. X-Versions-Location On DELETE, only remove the current version of the object. If any previous versions exist in the archive container, the most recent one is copied over the current version, and the copy in the archive container is deleted. As a result, if you have 5 total versions of the object, you must delete the object 5 times for that object name to start responding with 404 Not Found. Either header may be used for the various containers within an account, but only one may be set for any given container. Attempting to set both simultaneously will result in a 400 Bad Request response. Note It is recommended to use a different archive container for each container that is being versioned. Note Enabling versioning on an archive container is not recommended. When data is PUT into a versioned container (a container with the versioning flag turned on), the existing data in the file is redirected to a new object in the archive container and the data in the PUT request is saved as the data for the versioned object. The new object name (for the previous version) is <archive_container>/<length><object_name>/<timestamp>, where length is the 3-character zero-padded hexadecimal length of the <object_name> and <timestamp> is the timestamp of when the previous version was created. A GET to a versioned object will return the current version of the object without having to do any request redirects or metadata lookups. A POST to a versioned object will update the object metadata as normal, but will not create a new version of the object.
In other words, new versions are only created when the content of the object changes. A DELETE to a versioned object will be handled in one of two ways, as described above. To restore a previous version of an object, find the desired version in the archive container then issue a COPY with a Destination header indicating the original location. This will archive the current version similar to a PUT over the versioned object. If the client additionally wishes to permanently delete what was the current version, it must find the newly-created archive in the archive container and issue a separate DELETE to it. This middleware was written as an effort to refactor parts of the proxy server, so this functionality was already available in previous releases and every attempt was made to maintain backwards compatibility. To allow operators to perform a seamless upgrade, it is not required to add the middleware to the proxy pipeline and the flag allow_versions in the container server configuration files is still valid, but only when using X-Versions-Location. In future releases, allow_versions will be deprecated in favor of adding this middleware to the pipeline to enable or disable the feature. In case the middleware is added to the proxy pipeline, you must also set allow_versioned_writes to True in the middleware options to enable the information about this middleware to be returned in a /info request. Note You need to add the middleware to the proxy pipeline and set allow_versioned_writes = True to use X-History-Location. Setting allow_versions = True in the container server is not sufficient to enable the use of X-History-Location. If allow_versioned_writes is set in the filter configuration, you can leave the allow_versions flag in the container server configuration files untouched. If you decide to disable or remove the allow_versions flag, you must re-set any existing containers that had the X-Versions-Location flag configured so that it can now be tracked by the versioned_writes middleware. Clients should not use the X-History-Location header until all proxies in the cluster have been upgraded to a version of Swift that supports it. Attempting to use X-History-Location during a rolling upgrade may result in some requests being served by proxies running old code, leading to data loss. First, create a container with the X-Versions-Location header or add the header to an existing container. Also make sure the container referenced by the X-Versions-Location exists.
In this example, the name of that container is versions: ``` curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-Versions-Location: versions" http://<storage_url>/container curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` See a listing of the older versions of the object: ``` curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ ``` Now delete the current version of the object and see that the older version is gone from the versions container and back in the container container: ``` curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ curl -i -XGET -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` As above, create a container with the X-History-Location header and ensure that the container referenced by the X-History-Location exists. In this example, the name of that container is versions: ``` curl -i -XPUT -H "X-Auth-Token: <token>" -H "X-History-Location: versions" http://<storage_url>/container curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions ``` Create an object (the first version): ``` curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` Now create a new version of that object: ``` curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` Now delete the current version of the object. Subsequent requests will 404: ``` curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject curl -i -H "X-Auth-Token: <token>" http://<storage_url>/container/myobject ``` A listing of the older versions of the object will include both the first and second versions of the object, as well as a delete marker object: ``` curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ ``` To restore a previous version, simply COPY it from the archive container: ``` curl -i -XCOPY -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> -H "Destination: container/myobject" ``` Note that the archive container still has all previous versions of the object, including the source for the restore: ``` curl -i -H "X-Auth-Token: <token>" http://<storage_url>/versions?prefix=008myobject/ ``` To permanently delete a previous version, DELETE it from the archive container: ``` curl -i -XDELETE -H "X-Auth-Token: <token>" http://<storage_url>/versions/008myobject/<timestamp> ``` If you want to disable all functionality, set allow_versioned_writes to False in the middleware options. Disable versioning from a container (x is any value except empty): ``` curl -i -XPOST -H "X-Auth-Token: <token>" -H "X-Remove-Versions-Location: x" http://<storage_url>/container ``` Bases: WSGIContext Handle DELETE requests when in stack mode. Delete current version of object and pop previous version in its place. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. container_name container name. object_name object name. 
Handle DELETE requests when in history mode. Copy current version of object to versions_container and write a delete marker before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Copy current version of object to versions_container before proceeding with original request. req original request. versions_cont container where previous versions of the object are stored. api_version api version. account_name account name. object_name name of object of original request Profiling middleware for Swift Servers. The current implementation is based on an eventlet-aware profiler. (In the future, more profilers could be added to collect more data for analysis.) It profiles all incoming requests and accumulates CPU timing statistics for performance tuning and optimization. A mini web UI is also provided for profiling data analysis. It can be accessed from the URLs below. Index page for browsing profile data: ``` http://SERVER_IP:PORT/__profile__ ``` List all profiles to return profile ids in json format: ``` http://SERVER_IP:PORT/__profile__/ http://SERVER_IP:PORT/__profile__/all ``` Retrieve specific profile data in different formats: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods] http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods] ``` Retrieve metrics from a specific function in json format: ``` http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json http://SERVER_IP:PORT/__profile__/current/NFL?format=json http://SERVER_IP:PORT/__profile__/all/NFL?format=json NFL is defined by the concatenation of file name, function name and the first line number. e.g.: account.py:50(GET_or_HEAD) or with full path: opt/stack/swift/swift/proxy/controllers/account.py:50(GET_or_HEAD) A list of URL examples: http://localhost:8080/__profile__ (proxy server) http://localhost:6200/__profile__/all (object server) http://localhost:6201/__profile__/current (container server) http://localhost:6202/__profile__/12345?format=json (account server) ``` The profiling middleware can be configured in the paste file for WSGI servers such as the proxy, account, container and object servers. Please refer to the sample configuration files in the etc directory. The profiling data is provided in four formats: binary (by default), json, csv and an odf spreadsheet, which requires installing the odfpy library: ``` sudo pip install odfpy ``` There's also a simple visualization capability which is enabled by using the matplotlib toolkit; it is also required to be installed if you want to use this feature.
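A minimal sketch of how this profiling filter might be wired into a server's paste configuration; the option names below mirror the commented-out defaults in the sample configs, but treat them as assumptions to verify against the files in the etc directory mentioned above:

```
[pipeline:main]
# assumed placement; enable the profiler in front of the server app
pipeline = xprofile proxy-server

[filter:xprofile]
use = egg:swift#xprofile
# assumed option names/values; see the sample etc/ configs
log_filename_prefix = /tmp/log/swift/profile/default.profile
dump_interval = 5.0
flush_at_shutdown = false
```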
{ "category": "Runtime", "file_name": "static-website.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "You can store multiple versions of your content so that you can recover from unintended overwrites. Object versioning is an easy way to implement version control, which you can use with any type of content. Note You cannot version a large-object manifest file, but the large-object manifest file can point to versioned segments. Note It is strongly recommended that you put non-current objects in a different container than the container where current object versions reside. To allow object versioning within a cluster, the cloud provider should add the versioned_writes filter to the pipeline and set the allowversionedwrites option to true in the [filter:versioned_writes] section of the proxy-server configuration file. To enable object versioning for a container, you must specify an archive container that will retain non-current versions via either the X-Versions-Location or X-History-Location header. These two headers enable two distinct modes of operation. Either mode may be used within a cluster, but only one mode may be active for any given container. You must UTF-8-encode and then URL-encode the container name before you include it in the header. For both modes, PUT requests will archive any pre-existing objects before writing new data, and GET requests will serve the current version. COPY requests behave like a GET followed by a PUT; that is, if the copy source is in a versioned container then the current version will be copied, and if the copy destination is in a versioned container then any pre-existing object will be archived before writing new data. If object versioning was enabled using X-History-Location, then object DELETE requests will copy the current version to the archive container then remove it from the versioned container. If object versioning was enabled using X-Versions-Location, then object DELETE requests will restore the most-recent version from the archive container, overwriting the current version. Create the current container: ``` ``` ``` HTTP/1.1 201 Created Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txb91810fb717347d09eec8-0052e18997 X-Openstack-Request-Id: txb91810fb717347d09eec8-0052e18997 Date: Thu, 23 Jan 2014 21:28:55 GMT ``` Create the first version of an object in the current container: ``` ``` ``` HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a X-Openstack-Request-Id: tx5992d536a4bd4fec973aa-0052e18a2a Date: Thu, 23 Jan 2014 21:31:22 GMT ``` Nothing is written to the non-current version container when you initially PUT an object in the current container. However, subsequent PUT requests that edit an object trigger the creation of a version of that object in the archive" }, { "data": "These non-current versions are named as follows: ``` <length><object_name>/<timestamp> ``` Where length is the 3-character, zero-padded hexadecimal character length of the object, <object_name> is the object name, and <timestamp> is the time when the object was initially created as a current version. 
Create a second version of the object in the current container: ``` ``` ``` HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c X-Openstack-Request-Id: tx468287ce4fc94eada96ec-0052e18c8c Date: Thu, 23 Jan 2014 21:41:32 GMT ``` Issue a GET request to a versioned object to get the current version of the object. You do not have to do any request redirects or metadata lookups. List older versions of the object in the archive container: ``` ``` ``` HTTP/1.1 200 OK Content-Length: 30 X-Container-Object-Count: 1 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/plain; charset=utf-8 X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e X-Openstack-Request-Id: tx9a441884997542d3a5868-0052e18d8e Date: Thu, 23 Jan 2014 21:45:50 GMT 009my_object/1390512682.92052 ``` Note A POST request to a versioned object updates only the metadata for the object and does not create a new version of the object. New versions are created only when the content of the object changes. Issue a DELETE request to a versioned object to remove the current version of the object and replace it with the next-most current version in the non-current container. ``` ``` ``` HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd X-Openstack-Request-Id: tx006d944e02494e229b8ee-0052e18edd Date: Thu, 23 Jan 2014 21:51:25 GMT ``` List objects in the archive container to show that the archived object was moved back to the current container: ``` ``` ``` HTTP/1.1 204 No Content Content-Length: 0 X-Container-Object-Count: 0 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed X-Openstack-Request-Id: tx044f2a05f56f4997af737-0052e18eed Date: Thu, 23 Jan 2014 21:51:41 GMT ``` This next-most current version carries with it any metadata last set on it. If want to completely remove an object and you have five versions of it, you must DELETE it five times. Create the current container: ``` ``` ``` HTTP/1.1 201 Created Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txb91810fb717347d09eec8-0052e18997 X-Openstack-Request-Id: txb91810fb717347d09eec8-0052e18997 Date: Thu, 23 Jan 2014 21:28:55 GMT ``` Create the first version of an object in the current container: ``` ``` ``` HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a X-Openstack-Request-Id: tx5992d536a4bd4fec973aa-0052e18a2a Date: Thu, 23 Jan 2014 21:31:22 GMT ``` Nothing is written to the non-current version container when you initially PUT an object in the current container. However, subsequent PUT requests that edit an object trigger the creation of a version of that object in the archive" }, { "data": "These non-current versions are named as follows: ``` <length><object_name>/<timestamp> ``` Where length is the 3-character, zero-padded hexadecimal character length of the object, <object_name> is the object name, and <timestamp> is the time when the object was initially created as a current version. 
Create a second version of the object in the current container: ``` ``` ``` HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c X-Openstack-Request-Id: tx468287ce4fc94eada96ec-0052e18c8c Date: Thu, 23 Jan 2014 21:41:32 GMT ``` Issue a GET request to a versioned object to get the current version of the object. You do not have to do any request redirects or metadata lookups. List older versions of the object in the archive container: ``` ``` ``` HTTP/1.1 200 OK Content-Length: 30 X-Container-Object-Count: 1 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/plain; charset=utf-8 X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e X-Openstack-Request-Id: tx9a441884997542d3a5868-0052e18d8e Date: Thu, 23 Jan 2014 21:45:50 GMT 009my_object/1390512682.92052 ``` Note A POST request to a versioned object updates only the metadata for the object and does not create a new version of the object. New versions are created only when the content of the object changes. Issue a DELETE request to a versioned object to copy the current version of the object to the archive container then delete it from the current container. Subsequent GET requests to the object in the current container will return 404 Not Found. ``` ``` ``` HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd X-Openstack-Request-Id: tx006d944e02494e229b8ee-0052e18edd Date: Thu, 23 Jan 2014 21:51:25 GMT ``` List older versions of the object in the archive container: ``` ``` ``` HTTP/1.1 200 OK Content-Length: 90 X-Container-Object-Count: 3 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed X-Openstack-Request-Id: tx044f2a05f56f4997af737-0052e18eed Date: Thu, 23 Jan 2014 21:51:41 GMT 009my_object/1390512682.92052 009my_object/1390512692.23062 009my_object/1390513885.67732 ``` In addition to the two previous versions of the object, the archive container has a delete marker to record when the object was deleted. To permanently delete a previous version, issue a DELETE to the version in the archive container. To disable object versioning for the current container, remove its X-Versions-Location metadata header by sending an empty key value. ``` ``` ``` HTTP/1.1 202 Accepted Content-Length: 76 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txe2476de217134549996d0-0052e19038 X-Openstack-Request-Id: txe2476de217134549996d0-0052e19038 Date: Thu, 23 Jan 2014 21:57:12 GMT <html><h1>Accepted</h1><p>The request is accepted for processing.</p></html> ``` Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "use_content-encoding_metadata.html.md", "project_name": "Swift", "subcategory": "Cloud Native Storage" }
[ { "data": "The owner of an Object Storage account controls access to that account and its containers and objects. An owner is the user who has the admin role for that tenant. The tenant is also known as the project or account. As the account owner, you can modify account metadata and create, modify, and delete containers and objects. To identify yourself as the account owner, include an authentication token in the X-Auth-Token header in the API request. Depending on the token value in the X-Auth-Token header, one of the following actions occur: X-Auth-Token contains the token for the account owner. The request is permitted and has full access to make changes to the account. The X-Auth-Token header is omitted or it contains a token for a non-owner or a token that is not valid. The request fails with a 401 Unauthorized or 403 Forbidden response. You have no access to accounts or containers, unless an access control list (ACL) explicitly grants access. The account owner can grant account and container access to users through access control lists (ACLs). In addition, it is possible to provide an additional token in the X-Service-Token header. More information about how this is used is in Using Swift as Backing Store for Service Data. The following list describes the authentication services that you can use with Object Storage: OpenStack Identity (keystone): For Object Storage, account is synonymous with project or tenant ID. Tempauth middleware: Object Storage includes this middleware. User and account management is performed in Object Storage itself. Swauth middleware: Stored in github, this custom middleware is modeled on Tempauth. Usage is similar to Tempauth. Other custom middleware: Write it yourself to fit your environment. Specifically, you use the X-Auth-Token header to pass an authentication token to an API request. Authentication tokens expire after a time period that the authentication service defines. When a token expires, use of the token causes requests to fail with a 401 Unauthorized response. To continue, you must obtain a new token. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. The OpenStack project is provided under the Apache 2.0 license. Docs.openstack.org is powered by Rackspace Cloud Computing." } ]
{ "category": "Runtime", "file_name": "index.html#getting-started.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": "Manta, Triton's object storage and converged analytics solution, is a highly scalable, distributed object storage service with integrated compute that enables the creation of analytics jobs (more generally, compute jobs) which process and transform data at rest. Developers can store and process any amount of data at any time where a simple web API call replaces the need for spinning up instances. Manta compute is a complete and high performance compute environment including R, Python, node.js, Perl, Ruby, Java, C/C++, ffmpeg, grep, awk and others. Metering is by the second with zero provisioning, data movement or scheduling latency costs. This page describes the service and how to get started. You can also skip straight to some compute examples. Some features of the service include There are a number of use cases that become possible when you have a facility for running compute jobs directly on object storage nodes. These are all possible without having to download or move your data to other instances. For more examples, see the Job Examples and Patterns page. These are systems that customers and tritondatacenter engineers have built on top of Manta. To use Triton's object storage, you need a Triton Compute account. If you don't already have an account, contact your administrator. Once you have signed up, you will need to add an SSH public key to your account. The Triton object storage service is just one of a family of services. Triton Public Cloud services range from instances in our standard Persistent Compute Service (metered by the hour, month, or year) to our ephemeral Manta compute service (by the second). All are designed to seamlessly work with our Object Storage and Data Services. This tutorial assumes you've signed up for a Triton account and have a public SSH key added to your account. We will cover installing the node.js SDK and CLI, setting up your shell environment variables, and then working through examples of creating directories, objects, links and finally running compute jobs on your data. The CLI is the only tool used in these examples, and the instructions assume you're doing this from a Mac OS X, SmartOS, Linux or BSD system, and know how to use SSH and a terminal application such as Terminal.app. It helps to be familiar with basic Unix facilities like the shells, pipes, stdin, and stdout. If you have at least node.js 0.8.x installed (0.10.x is recommended) you can install the CLI and SDK from an npm package. All of the examples below work with both node.js 0.8.x and 0.10.x. ``` sudo npm install manta -g ``` Additionally, as the API is JSON-based, the examples will refer to the json tool, which helps put JSON output in a more human readable format. You can install from npm: ``` sudo npm install json -g ``` Lastly, and while optional, if you want to use verbose debug logging with the SDK, you will want bunyan: ``` sudo npm install bunyan -g ``` While you can specify command line switches to all of the node-manta CLI programs, it is significantly easier for you to set them globally in your environment. There are four environment variables that all command line tools look for: Copy all of the text below, and paste it into your ~/.bash_profile or ~/.bashrc. 
``` export MANTA_URL=https://us-central.manta.mnx.io export MANTAUSER=$TRITONCLOUDUSERNAME unset MANTA_SUBUSER # Unless you have subusers export MANTAKEYID=$(ssh-keygen -E md5 -l -f" }, { "data": "| awk '{print $2}' | tr -d '\\n' | cut -d: -f 2-) ``` An easy way to do this in Mac OS X, is to copy the text, then use the pbpaste command to add the text in the clipboard to your file. like this: ``` pbpaste >> ~/.bash_profile ``` Edit the ~/.bashprofile or ~/.bashrc file, replacing $TRITONCLOUDUSERNAME with your Triton Public Cloud username. Run ``` source ~/.bash_profile ``` or ``` source ~/.bashrc ``` or restart your terminal to pick up the changes you made to ~/.bash_profile or ~/.bashrc. Everything works if typing mls /$MANTA_USER/ returns the top level contents. ``` mls /$MANTA_USER/ jobs/ public/ reports/ stor/ uploads/ ``` The shortcut ~~ is equivalent to typing /$MANTA_USER. Since many operations require full Manta paths, you'll find it useful. We will use it for the remainder of this document. ``` mls ~~/ jobs/ public/ reports/ stor/ uploads/ ``` This Getting Started guide uses command line tools that are Manta analogs of common Unix tools (e.g. mls == ls). You can find man pages for these tools in the CLI Utilities Reference Now that you've signed up, have the CLI and have your environment variables set, you are ready to create data. In this section we will create an object, a subdirectory for you to place another object in, and create a SnapLink to one of those objects. These examples are written so that you can copy from here wherever you see a $ and paste directly into Terminal.app If you're the kind of person who likes understanding \"what all this is\" before going through examples, you can read about the Storage Architecture in the Object Storage Reference. Feel free to pause here, go read that, and then come right back to this point. Objects are the main entity you will use. An object is non-interpreted data of any size that you read and write to the store. Objects are immutable. You cannot append to them or edit them in place. When you overwrite an object, you completely replace it. By default, objects are replicated to two physical servers, but you can specify between one and six copies, depending on your needs. You will be charged for the number of bytes you consume, so specifying one copy is half the price of two, with the trade-off being a decrease in potential durability and availability. When you write an object, you give it a name. Object names (keys) look like Unix file paths. This is how you would create an object named ~~/stor/hello-foo that contains the data in the file hello.txt: ``` echo \"Hello, Manta\" > /tmp/hello.txt $ mput -f /tmp/hello.txt ~~/stor/hello-foo .../stor/hello-foo [==========================>] 100% 13B $ mget ~~/stor/hello-foo Hello, Manta ``` The service fully supports streaming uploads, so piping the classic \"Treasure Island\" would also work: ``` curl -sL http://www.gutenberg.org/ebooks/120.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/treasure_island.txt ``` In the example above, we don't have a local file, so mput doesn't attempt to set the MIME type. To make sure our object is properly readable by a browser, we set the HTTP Content-Type header explicitly. Now, about ~~/stor. Your \"namespace\" is /:login/stor. This is where all of your data that you would like to keep private is stored. 
In a moment we'll make some directories, but you can create any number of objects and directories in this namespace without conflicting with other users. In addition to /:login/stor, there is also /:login/public, which allows for unauthenticated reads over HTTP and" }, { "data": "This directory is useful for you to host world-readable files, such as media assets you would use in a CDN. All objects can be stored in Unix-like directories. As you have seen, /:login/stor is the top level directory. You can logically think of it like / in Unix environments. You can create any number of directories and sub-directories, but there is a limit to how many entries can exist in a single directory, which is 1,000,000 entries. In addition to /:login/stor, there are a few other top-level \"directories\" that are available to you. | 0 | 1 | |:-|:| | Directory | Description | | /:login/jobs | Job reports. Only you can read and destroy them; it is written by the system only. | | /:login/public | Public object storage. Anyone can access objects in this directory and its subdirectories. Only you can create and destroy them. | | /:login/reports | Usage and Access log reports. Only you can read and destroy them; it is written by the system only. | | /:login/uploads | Multipart uploads. Ongoing multipart uploads are stored in this directory. | | /:login/stor | Private object storage. Only you can create, destroy, and access objects in this directory and its subdirectories. | Directories are useful when you want to logically group objects (or other directories) and be able to list them efficiently (including feeding all the objects in a directory into parallelized compute jobs). Here are a few examples of creating, listing, and deleting directories: ``` mmkdir ~~/stor/stuff $ mls stuff/ treasure_island.txt $ mls ~~/stor/stuff $ mls -l ~~/stor drwxr-xr-x 1 loginname 0 May 15 17:02 stuff -rwxr-xr-x 1 loginname 391563 May 15 16:48 treasure_island.txt $ mmkdir -p ~~/stor/stuff/foo/bar/baz $ mrmdir ~~/stor/stuff/foo/bar/baz $ mrm -r ~~/stor/stuff ``` SnapLinks are a concept unique to the Manta service. SnapLinks are similar to a Unix hard-link, and because the system is \"copy on write,\" data changes are not reflected in the SnapLink. This property makes SnapLinks a very powerful entity that allows you to create any number of alternate names and versioning schemes that you like. As a concrete example, note what the following sequence of steps creates in the objects foo and bar: ``` echo \"Object One\" | mput ~~/stor/original $ mln /stor/original /stor/moved $ mget ~~/stor/moved Object One $ mget ~~/stor/original Object One $ echo \"Object Two\" | mput ~~/stor/original $ mget ~~/stor/original Object Two $ mget ~~/stor/moved Object One ``` As another example, while the service does not allow a \"move\" operation, you can mimic a move with SnapLinks: ``` mmkdir ~~/stor/books $ mln /stor/treasureisland.txt /stor/books/treasureisland.txt $ mrm ~~/stor/treasure_island.txt $ mls ~~/stor books/ foo moved original $ mls ~~/stor/books treasure_island.txt ``` You have now seen how to work with objects, directories, and SnapLinks. Now it is time to do some text processing. The jobs facility is designed to support operations on an arbitrary number of arbitrarily large objects. While performance considerations may dictate the optimal object size, the system can scale to very large datasets. You perform arbitrary compute tasks in an isolated OS instance, using MapReduce to manage distributed processing. 
MapReduce is a technique for dividing work across distributed servers, and dramatically reduces network bandwidth as the code you want to run on objects is brought to the physical server that holds the object(s), rather than transferring data to a processing" }, { "data": "The MapReduce implementation is unique in that you are given a full OS environment that allows you to run any code, as opposed to being bound to a particular framework/language. To demonstrate this, we will compose a MapReduce job purely using traditional Unix command line tools in the following examples. First, let's get a few more books into our data collection so we're processing more than one file: ``` curl -sL http://www.gutenberg.org/ebooks/1661.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/sherlock_holmes.txt $ curl -sL http://www.gutenberg.org/ebooks/76.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/huck_finn.txt $ curl -sL http://www.gutenberg.org/ebooks/2701.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/moby_dick.txt $ curl -sL http://www.gutenberg.org/ebooks/345.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/dracula.txt ``` Now, just to be sure you've got the same 5 files (and to learn about mfind), run the following: ``` mfind ~~/stor/books ~~/stor/books/dracula.txt ~~/stor/books/huck_finn.txt ~~/stor/books/moby_dick.txt ~~/stor/books/sherlock_holmes.txt ~~/stor/books/treasure_island.txt ``` mfind is powerful like Unix find, in that you specify a starting point and use basic regular expressions to match on names. This is another way to list the names of all the objects (-t o) that end in txt: ``` mfind -t o -n 'txt$' ~~/stor ~~/stor/books/dracula.txt ~~/stor/books/huck_finn.txt ~~/stor/books/moby_dick.txt ~~/stor/books/sherlock_holmes.txt ~~/stor/books/treasure_island.txt ``` Here's an example job that counts the number of times the word \"vampire\" appears in Dracula. ``` echo ~~/stor/books/dracula.txt | mjob create -o -m \"grep -ci vampire\" added 1 input to 7b39e12b-bb87-42a7-8c5f-deb9727fc362 32 ``` This command instructs the system to run grep -ci vampire on ~~/stor/books/dracula.txt. The -o flag tells mjob create to wait for the job to complete and then fetch and print the contents of the output objects. In this example, the result is 32. In more detail: this command creates a job to run the user script grep -ci vampire on each input object and then submits ~~/stor/books/dracula.txt as the only input to the job. The name of the job is (in this case) 7b39e12b-bb87-42a7-8c5f-deb9727fc362. When the job completes, the result is placed in an output object, which you can see with the mjob outputs command: ``` mjob outputs 7b39e12b-bb87-42a7-8c5f-deb9727fc362 /loginname/jobs/7b39e12b-bb87-42a7-8c5f-deb9727fc362/stor/loginname/stor/books/dracula.txt.0.1adb84bf-61b8-496f-b59a-57607b1797b0 ``` The output of the user script is in the contents of the output object: ``` mget $(mjob outputs 7b39e12b-bb87-42a7-8c5f-deb9727fc362) 32 ``` You can use a similar invocation to run the same job on all of the objects under ~~/stor/books: ``` mfind -t o ~~/stor/books | mjob create -o -m \"grep -ci human\" added 5 inputs to 69219541-fdab-441f-97f3-3317ef2c48c0 13 48 18 4 6 ``` In this example, the system runs 5 invocations of grep. Each of these is called a task. Each task produces one output, and the job itself winds up with 5 separate outputs. 
When searching for strings of text you need to put them inside single quotes ``` echo ~~/stor/books/treasure_island.txt | mjob create -o -m \"grep -ci 'you would be very wrong'\" added 1 input to 67cf98ac-063a-4e86-861a-b9a8ebc3618d 1 ``` If the grep command exits with a non-zero status (as grep does when it finds no matches in the input stream) or fails in some other way (e.g., dumps core), You'll see an error instead of an output object. You can get details on the error, including a link to stdout, stderr, and the core file (if any), using the mjob errors command. ``` mfind -t o ~~/stor/books | mjob create -o -m \"grep -ci vampires\" added 5 inputs to ef797aef-6254-4936-95a0-8b73414ff2f4 mjob: error: job ef797aef-6254-4936-95a0-8b73414ff2f4 had 4 errors ``` In this job, the four errors do not represent actual failures, but just objects with no match, so we can safely ignore them and look only at the output" }, { "data": "And this last one should have 5 \"errors\" ``` mfind -t o ~~/stor/books | mjob create -o -m \"grep -ci tweets\" added 5 inputs to ae47972a-c893-433a-a55f-b97ce643ffc0 mjob: error: job ae47972a-c893-433a-a55f-b97ce643ffc0 had 5 errors ``` We've just described the \"map\" phase of traditional map-reduce computations. The \"map\" phase performs the same computation on each of the input objects. The reduce phase typically combines the outputs from the map phase to produce a single output. One of the earlier examples computed the number of times the word \"human\" appeared in each book. We can use a simple awk script in the reduce phase to get the total number the of times \"human\" appears in all the books. ``` mfind -t o ~~/stor/books | \\ mjob create -o -m \"grep -ci human\" -r \"awk '{s+=\\$1} END{print s}'\" added 5 inputs to 12edb303-e481-4a39-b1c0-97d893ce0927 89 ``` This job has two phases: the map phase runs grep -ci human on each input object, then the reduce phase runs the awk script on the concatenated output from the first phase. awk '{s+=$1} END {print s}' sums a list of numbers, so it sums the list of numbers that come out of the first phase. You can combine several map and reduce phases. The outputs of any non-final phases become inputs for the next phase, and the outputs of the final phase become job outputs. While map phases always create one task for each input, reduce phases have a fixed number of tasks (just one by default). While map tasks get the contents of the input object on stdin as well as in a local file, reduce tasks only get a concatenated stream of all inputs. The inputs may be combined in any order, but data from separate inputs are never interleaved. In the next example, we'll also introduce an alternative ^ and ^^ to the -m and -r flags, and see the first appearance of maggr. Now we have 5 classic novels uploaded, on which we can perform some basic data analysis using nothing but Unix utilities. Let's first just see what the \"average\" length is (by number of words), which we can do using just the standard wc and the maggr command. ``` mfind -t o ~~/stor/books | mjob create -o 'wc -w' ^^ 'maggr mean' added 5 inputs to 69b747da-e636-4146-8bca-84b883ca2a8c 134486.4 ``` Let's break down what just happened in that magical one-liner. First, we'll look at the mjob create command. mjob create -o submits a new job, and then waits for the job to finish, then fetches and concatenates the outputs for you, which is very useful for interactive ad-hoc queries. 
'wc -w' ^^ 'maggr mean' is a MapReduce definition that defines a 'map' \"phase\" of wc -w, and a reduce \"phase\" of maggr mean. maggr is one of several tools we have in the compute instances that mirror similar Unix tools. A \"phase\" is simply a command (or chain of commands) to execute on data. There are two types of phases: map and reduce. Map phases run the given command on every input object and stream the output to the next phase, which may be another map phase, or likely a reduce phase. Reduce phases are run once, and concatenate all data output from the previous phase. The system runs your map-reduce commands by invoking them in a new bash" }, { "data": "By default your input data is available to your shell over stdin, and if you simply write output data to stdout, it is captured and moved to the next phase (this is how almost all standard Unix utilities work). mjob create uses the symbols ^ and ^^ to act like the standard Unix | (pipe) operator. The single ^ character indicates that the following command is part of the map phase. The double ^^ indicates that the following command is a reduce phase. In this syntax, the first phase is always a map phase. So the string 'wc -w' ^^ 'maggr mean', means \"execute wc -w on all objects given to the job\" and \"then run maggr mean on the data output from wc -w.\" maggr is a basic math utility function that is part of the compute environment. The above command could also have been written as: ``` mfind -t o ~~/stor/books | \\ mjob create -o 'wc -w' ^^ 'paste -sd+ | echo \"($(cat -))/$(mjob inputs $MANTAJOBID | wc -l)\" | bc' ``` Which would create a mathematical string that bc can use that sums and then calculates the average by dividing by the number of inputs (which is retrieved dynamically). Although the compute facility provides a full SmartOS environment, your jobs may require special software, additional configuration information, or any other static file that is useful. You can make these available as assets, which are objects that are copied into the compute environment when your job is run. For example suppose you want to do a word frequency count using shell scripts that contain your map and reduce logic. We can do this with two awk scripts, so let's write them and upload them as assets. map.sh outputs a mapping of word to occurrence, like hello 10: ``` { for (i = 1; i <= NF; i++) { counts[$i]++ } } END { for (i in counts) { print i, counts[i]; } } ``` Copy the above and paste into a file named map.sh, or if you are on Mac OS X, you can use the command below ``` pbpaste > map.sh ``` red.sh simply combines the output of all the map outputs: ``` { byword[$1] += $2; } END { for (i in byword) { print i, byword[i] } } ``` Copy the above and paste into a file named red.sh, or if you are on Mac OS X, you can use the command below ``` pbpaste > red.sh ``` To make the scripts available as assets, first store them in the service. ``` mput -f map.sh ~~/stor/map.sh $ mput -f red.sh ~~/stor/red.sh ``` Then use the -s switch to specify and use them in a job: ``` mfind -t o ~~/stor/books | mjob create -o -s ~~/stor/map.sh \\ -m '/assets/$MANTA_USER/stor/map.sh' \\ -s ~~/stor/red.sh \\ -r '/assets/$MANTA_USER/stor/red.sh | sort -k2,2 -n' ``` You'll see a trailing output like ``` ... 
a 13451 to 14979 of 15314 and 21338 the 32241 ``` If you'd like to see how long this takes ``` time mfind -t o ~~/stor/books | mjob create -o -s ~~/stor/map.sh \\ -m '/assets/$MANTA_USER/stor/map.sh' \\ -s ~~/stor/red.sh \\ -r '/assets/$MANTA_USER/stor/red.sh | sort -k2,2 -n' ``` The time output at the end will look like ``` real 0m7.942s user 0m1.324s sys 0m0.169s ``` Note that assets are made available to you in the compute environment under the path" }, { "data": "A more sophisticated program would likely use a list of stopwords to get rid of common words like \"and, the\" and so on, which could also be mapped in as an asset. This introduction gave you a basic overview of Manta storage service: how to work with objects and how to use the system's compute environment to operate on those objects. The system provides many more sophisticated features, including: Let take you through some simple examples of running node.js applications directly on the object store. We'll be using some assets that are present in the mantademo account. This is also a good example how you can run compute with and on data people have made available in their ~~/public directories. We'll start with a \"Hello,Manta\" demo using node.js, you can see the script with an mget: ``` mget /mantademo/public/hello-manta-node.js console.log(\"hello,manta!!\"); ``` Now let's create a job using the what we talked about above in the Running Jobs Using Assets section. We're going to start by both something that's \"obvious\" and won't work. ``` mjob create -s /mantademo/public/hello-manta-node.js -m \"node /mantademo/public/hello-manta-node.js\" 30706a6b-6386-495b-9657-8a572b99d4f8 [this is a unique JOB ID] $ mjob get 30706a6b-6386-495b-9657-8a572b99d4f8 [replace with your actual JOB ID] { \"id\": \"30706a6b-6386-495b-9657-8a572b99d4f8\", \"name\": \"\", \"state\": \"running\", \"cancelled\": false, \"inputDone\": false, \"stats\": { \"errors\": 0, \"outputs\": 0, \"retries\": 0, \"tasks\": 0, \"tasksDone\": 0 }, \"timeCreated\": \"2013-06-16T19:47:30.610Z\", \"phases\": [ { \"assets\": [ \"/mantademo/public/hello-manta-node.js\" ], \"exec\": \"node /mantademo/public/hello-manta-node.js\", \"type\": \"map\" } ], \"options\": {} } ``` The inputDone field is \"false\" because we asked mjob to create a map phase, which requires at least one key, but we did not provide any keys. It's sort of an artifact of the hello world example and makes a important point. Let's cancel this job, in fact, let's cancel all jobs so we can clean up anything we've left running from the examples above. ``` mjob list -s running | xargs mjob cancel ``` This also highlights that any CLI tool is normal Unix. The following two commands are equivalent. ``` mjob get `mjob list` $ mjob list | xargs mjob get ``` Back to the node.js example, if we pipe the hello-manta-node.js in as a key and do it as a map phase with the -m flag: ``` echo /mantademo/public/hello-manta-node.js | mjob create -o -m \"node\" added 1 input to e7711dda-caac-412f-9355-61c8006819ae hello,manta!! ``` We can also do this as a reduce phase (using the -r flag). Reduce phases always run, even without keys. ``` mjob create -o </dev/null -s /mantademo/public/hello-manta-node.js \\ -r \"node /assets/mantademo/public/hello-manta-node.js\" hello,manta!! ``` The flag -o </dev/null is that so that we're redirecting from /dev/null and mjob create knows to not attempt to read any additional keys. Now let's take it up one more level. 
You can see what's inside a simple node.js application that capitalizes all the text in an input file. ``` mget /mantademo/public/capitalizer.js process.stdin.on('data', function(d) { process.stdout.write(d.toString().replace(/\\./g, '!').toUpperCase()); }); process.stdin.resume(); $ mget /mantademo/public/manta-desc.txt Manta Storage Service is a cloud service that offers both a highly available, highly durable object store and integrated compute. Application developers can store and process any amount of data at any time, from any location, without requiring additional compute resources. $ echo /mantademo/public/manta-desc.txt | mjob create -o -s /mantademo/public/capitalizer.js -m 'node /assets/mantademo/public/capitalizer.js' added 1 input to 2aa8a0a9-92e9-47f3-8b66-acf2a22d25a8 MANTA STORAGE SERVICE IS A CLOUD SERVICE THAT OFFERS BOTH A HIGHLY AVAILABLE, HIGHLY DURABLE OBJECT STORE AND INTEGRATED COMPUTE! APPLICATION DEVELOPERS CAN STORE AND PROCESS ANY AMOUNT OF DATA AT ANY TIME, FROM ANY LOCATION, WITHOUT REQUIRING ADDITIONAL COMPUTE RESOURCES! ``` For more details compute jobs see the Compute Jobs Reference documentation, along with the default installed software and" } ]
{ "category": "Runtime", "file_name": "index.html.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": "Manta, Triton's object storage and converged analytics solution, is a highly scalable, distributed object storage service with integrated compute that enables the creation of analytics jobs (more generally, compute jobs) which process and transform data at rest. Developers can store and process any amount of data at any time where a simple web API call replaces the need for spinning up instances. Manta compute is a complete and high performance compute environment including R, Python, node.js, Perl, Ruby, Java, C/C++, ffmpeg, grep, awk and others. Metering is by the second with zero provisioning, data movement or scheduling latency costs. This page describes the service and how to get started. You can also skip straight to some compute examples. Some features of the service include There are a number of use cases that become possible when you have a facility for running compute jobs directly on object storage nodes. These are all possible without having to download or move your data to other instances. For more examples, see the Job Examples and Patterns page. These are systems that customers and tritondatacenter engineers have built on top of Manta. To use Triton's object storage, you need a Triton Compute account. If you don't already have an account, contact your administrator. Once you have signed up, you will need to add an SSH public key to your account. The Triton object storage service is just one of a family of services. Triton Public Cloud services range from instances in our standard Persistent Compute Service (metered by the hour, month, or year) to our ephemeral Manta compute service (by the second). All are designed to seamlessly work with our Object Storage and Data Services. This tutorial assumes you've signed up for a Triton account and have a public SSH key added to your account. We will cover installing the node.js SDK and CLI, setting up your shell environment variables, and then working through examples of creating directories, objects, links and finally running compute jobs on your data. The CLI is the only tool used in these examples, and the instructions assume you're doing this from a Mac OS X, SmartOS, Linux or BSD system, and know how to use SSH and a terminal application such as Terminal.app. It helps to be familiar with basic Unix facilities like the shells, pipes, stdin, and stdout. If you have at least node.js 0.8.x installed (0.10.x is recommended) you can install the CLI and SDK from an npm package. All of the examples below work with both node.js 0.8.x and 0.10.x. ``` sudo npm install manta -g ``` Additionally, as the API is JSON-based, the examples will refer to the json tool, which helps put JSON output in a more human readable format. You can install from npm: ``` sudo npm install json -g ``` Lastly, and while optional, if you want to use verbose debug logging with the SDK, you will want bunyan: ``` sudo npm install bunyan -g ``` While you can specify command line switches to all of the node-manta CLI programs, it is significantly easier for you to set them globally in your environment. There are four environment variables that all command line tools look for: Copy all of the text below, and paste it into your ~/.bash_profile or ~/.bashrc. 
``` export MANTA_URL=https://us-central.manta.mnx.io export MANTAUSER=$TRITONCLOUDUSERNAME unset MANTA_SUBUSER # Unless you have subusers export MANTAKEYID=$(ssh-keygen -E md5 -l -f" }, { "data": "| awk '{print $2}' | tr -d '\\n' | cut -d: -f 2-) ``` An easy way to do this in Mac OS X, is to copy the text, then use the pbpaste command to add the text in the clipboard to your file. like this: ``` pbpaste >> ~/.bash_profile ``` Edit the ~/.bashprofile or ~/.bashrc file, replacing $TRITONCLOUDUSERNAME with your Triton Public Cloud username. Run ``` source ~/.bash_profile ``` or ``` source ~/.bashrc ``` or restart your terminal to pick up the changes you made to ~/.bash_profile or ~/.bashrc. Everything works if typing mls /$MANTA_USER/ returns the top level contents. ``` mls /$MANTA_USER/ jobs/ public/ reports/ stor/ uploads/ ``` The shortcut ~~ is equivalent to typing /$MANTA_USER. Since many operations require full Manta paths, you'll find it useful. We will use it for the remainder of this document. ``` mls ~~/ jobs/ public/ reports/ stor/ uploads/ ``` This Getting Started guide uses command line tools that are Manta analogs of common Unix tools (e.g. mls == ls). You can find man pages for these tools in the CLI Utilities Reference Now that you've signed up, have the CLI and have your environment variables set, you are ready to create data. In this section we will create an object, a subdirectory for you to place another object in, and create a SnapLink to one of those objects. These examples are written so that you can copy from here wherever you see a $ and paste directly into Terminal.app If you're the kind of person who likes understanding \"what all this is\" before going through examples, you can read about the Storage Architecture in the Object Storage Reference. Feel free to pause here, go read that, and then come right back to this point. Objects are the main entity you will use. An object is non-interpreted data of any size that you read and write to the store. Objects are immutable. You cannot append to them or edit them in place. When you overwrite an object, you completely replace it. By default, objects are replicated to two physical servers, but you can specify between one and six copies, depending on your needs. You will be charged for the number of bytes you consume, so specifying one copy is half the price of two, with the trade-off being a decrease in potential durability and availability. When you write an object, you give it a name. Object names (keys) look like Unix file paths. This is how you would create an object named ~~/stor/hello-foo that contains the data in the file hello.txt: ``` echo \"Hello, Manta\" > /tmp/hello.txt $ mput -f /tmp/hello.txt ~~/stor/hello-foo .../stor/hello-foo [==========================>] 100% 13B $ mget ~~/stor/hello-foo Hello, Manta ``` The service fully supports streaming uploads, so piping the classic \"Treasure Island\" would also work: ``` curl -sL http://www.gutenberg.org/ebooks/120.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/treasure_island.txt ``` In the example above, we don't have a local file, so mput doesn't attempt to set the MIME type. To make sure our object is properly readable by a browser, we set the HTTP Content-Type header explicitly. Now, about ~~/stor. Your \"namespace\" is /:login/stor. This is where all of your data that you would like to keep private is stored. 
In a moment we'll make some directories, but you can create any number of objects and directories in this namespace without conflicting with other users. In addition to /:login/stor, there is also /:login/public, which allows for unauthenticated reads over HTTP and" }, { "data": "This directory is useful for you to host world-readable files, such as media assets you would use in a CDN. All objects can be stored in Unix-like directories. As you have seen, /:login/stor is the top level directory. You can logically think of it like / in Unix environments. You can create any number of directories and sub-directories, but there is a limit to how many entries can exist in a single directory, which is 1,000,000 entries. In addition to /:login/stor, there are a few other top-level \"directories\" that are available to you. | 0 | 1 | |:-|:| | Directory | Description | | /:login/jobs | Job reports. Only you can read and destroy them; it is written by the system only. | | /:login/public | Public object storage. Anyone can access objects in this directory and its subdirectories. Only you can create and destroy them. | | /:login/reports | Usage and Access log reports. Only you can read and destroy them; it is written by the system only. | | /:login/uploads | Multipart uploads. Ongoing multipart uploads are stored in this directory. | | /:login/stor | Private object storage. Only you can create, destroy, and access objects in this directory and its subdirectories. | Directories are useful when you want to logically group objects (or other directories) and be able to list them efficiently (including feeding all the objects in a directory into parallelized compute jobs). Here are a few examples of creating, listing, and deleting directories: ``` mmkdir ~~/stor/stuff $ mls stuff/ treasure_island.txt $ mls ~~/stor/stuff $ mls -l ~~/stor drwxr-xr-x 1 loginname 0 May 15 17:02 stuff -rwxr-xr-x 1 loginname 391563 May 15 16:48 treasure_island.txt $ mmkdir -p ~~/stor/stuff/foo/bar/baz $ mrmdir ~~/stor/stuff/foo/bar/baz $ mrm -r ~~/stor/stuff ``` SnapLinks are a concept unique to the Manta service. SnapLinks are similar to a Unix hard-link, and because the system is \"copy on write,\" data changes are not reflected in the SnapLink. This property makes SnapLinks a very powerful entity that allows you to create any number of alternate names and versioning schemes that you like. As a concrete example, note what the following sequence of steps creates in the objects foo and bar: ``` echo \"Object One\" | mput ~~/stor/original $ mln /stor/original /stor/moved $ mget ~~/stor/moved Object One $ mget ~~/stor/original Object One $ echo \"Object Two\" | mput ~~/stor/original $ mget ~~/stor/original Object Two $ mget ~~/stor/moved Object One ``` As another example, while the service does not allow a \"move\" operation, you can mimic a move with SnapLinks: ``` mmkdir ~~/stor/books $ mln /stor/treasureisland.txt /stor/books/treasureisland.txt $ mrm ~~/stor/treasure_island.txt $ mls ~~/stor books/ foo moved original $ mls ~~/stor/books treasure_island.txt ``` You have now seen how to work with objects, directories, and SnapLinks. Now it is time to do some text processing. The jobs facility is designed to support operations on an arbitrary number of arbitrarily large objects. While performance considerations may dictate the optimal object size, the system can scale to very large datasets. You perform arbitrary compute tasks in an isolated OS instance, using MapReduce to manage distributed processing. 
MapReduce is a technique for dividing work across distributed servers, and dramatically reduces network bandwidth as the code you want to run on objects is brought to the physical server that holds the object(s), rather than transferring data to a processing" }, { "data": "The MapReduce implementation is unique in that you are given a full OS environment that allows you to run any code, as opposed to being bound to a particular framework/language. To demonstrate this, we will compose a MapReduce job purely using traditional Unix command line tools in the following examples. First, let's get a few more books into our data collection so we're processing more than one file: ``` curl -sL http://www.gutenberg.org/ebooks/1661.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/sherlock_holmes.txt $ curl -sL http://www.gutenberg.org/ebooks/76.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/huck_finn.txt $ curl -sL http://www.gutenberg.org/ebooks/2701.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/moby_dick.txt $ curl -sL http://www.gutenberg.org/ebooks/345.txt.utf-8 | \\ mput -H 'content-type: text/plain' ~~/stor/books/dracula.txt ``` Now, just to be sure you've got the same 5 files (and to learn about mfind), run the following: ``` mfind ~~/stor/books ~~/stor/books/dracula.txt ~~/stor/books/huck_finn.txt ~~/stor/books/moby_dick.txt ~~/stor/books/sherlock_holmes.txt ~~/stor/books/treasure_island.txt ``` mfind is powerful like Unix find, in that you specify a starting point and use basic regular expressions to match on names. This is another way to list the names of all the objects (-t o) that end in txt: ``` mfind -t o -n 'txt$' ~~/stor ~~/stor/books/dracula.txt ~~/stor/books/huck_finn.txt ~~/stor/books/moby_dick.txt ~~/stor/books/sherlock_holmes.txt ~~/stor/books/treasure_island.txt ``` Here's an example job that counts the number of times the word \"vampire\" appears in Dracula. ``` echo ~~/stor/books/dracula.txt | mjob create -o -m \"grep -ci vampire\" added 1 input to 7b39e12b-bb87-42a7-8c5f-deb9727fc362 32 ``` This command instructs the system to run grep -ci vampire on ~~/stor/books/dracula.txt. The -o flag tells mjob create to wait for the job to complete and then fetch and print the contents of the output objects. In this example, the result is 32. In more detail: this command creates a job to run the user script grep -ci vampire on each input object and then submits ~~/stor/books/dracula.txt as the only input to the job. The name of the job is (in this case) 7b39e12b-bb87-42a7-8c5f-deb9727fc362. When the job completes, the result is placed in an output object, which you can see with the mjob outputs command: ``` mjob outputs 7b39e12b-bb87-42a7-8c5f-deb9727fc362 /loginname/jobs/7b39e12b-bb87-42a7-8c5f-deb9727fc362/stor/loginname/stor/books/dracula.txt.0.1adb84bf-61b8-496f-b59a-57607b1797b0 ``` The output of the user script is in the contents of the output object: ``` mget $(mjob outputs 7b39e12b-bb87-42a7-8c5f-deb9727fc362) 32 ``` You can use a similar invocation to run the same job on all of the objects under ~~/stor/books: ``` mfind -t o ~~/stor/books | mjob create -o -m \"grep -ci human\" added 5 inputs to 69219541-fdab-441f-97f3-3317ef2c48c0 13 48 18 4 6 ``` In this example, the system runs 5 invocations of grep. Each of these is called a task. Each task produces one output, and the job itself winds up with 5 separate outputs. 
When searching for strings of text, you need to put them inside single quotes: ``` echo ~~/stor/books/treasure_island.txt | mjob create -o -m \"grep -ci 'you would be very wrong'\" added 1 input to 67cf98ac-063a-4e86-861a-b9a8ebc3618d 1 ``` If the grep command exits with a non-zero status (as grep does when it finds no matches in the input stream) or fails in some other way (e.g., dumps core), you'll see an error instead of an output object. You can get details on the error, including a link to stdout, stderr, and the core file (if any), using the mjob errors command. ``` mfind -t o ~~/stor/books | mjob create -o -m \"grep -ci vampires\" added 5 inputs to ef797aef-6254-4936-95a0-8b73414ff2f4 mjob: error: job ef797aef-6254-4936-95a0-8b73414ff2f4 had 4 errors ``` In this job, the four errors do not represent actual failures, but just objects with no match, so we can safely ignore them and look only at the output objects. And this last one should have 5 \"errors\": ``` mfind -t o ~~/stor/books | mjob create -o -m \"grep -ci tweets\" added 5 inputs to ae47972a-c893-433a-a55f-b97ce643ffc0 mjob: error: job ae47972a-c893-433a-a55f-b97ce643ffc0 had 5 errors ``` We've just described the \"map\" phase of traditional map-reduce computations. The \"map\" phase performs the same computation on each of the input objects. The reduce phase typically combines the outputs from the map phase to produce a single output. One of the earlier examples computed the number of times the word \"human\" appeared in each book. We can use a simple awk script in the reduce phase to get the total number of times \"human\" appears in all the books. ``` mfind -t o ~~/stor/books | \\ mjob create -o -m \"grep -ci human\" -r \"awk '{s+=\\$1} END{print s}'\" added 5 inputs to 12edb303-e481-4a39-b1c0-97d893ce0927 89 ``` This job has two phases: the map phase runs grep -ci human on each input object, then the reduce phase runs the awk script on the concatenated output from the first phase. awk '{s+=$1} END {print s}' sums a list of numbers, so it sums the list of numbers that come out of the first phase. You can combine several map and reduce phases. The outputs of any non-final phases become inputs for the next phase, and the outputs of the final phase become job outputs. While map phases always create one task for each input, reduce phases have a fixed number of tasks (just one by default). While map tasks get the contents of the input object on stdin as well as in a local file, reduce tasks only get a concatenated stream of all inputs. The inputs may be combined in any order, but data from separate inputs are never interleaved. In the next example, we'll also introduce an alternative to the -m and -r flags, ^ and ^^, and see the first appearance of maggr. Now we have 5 classic novels uploaded, on which we can perform some basic data analysis using nothing but Unix utilities. Let's first just see what the \"average\" length is (by number of words), which we can do using just the standard wc and the maggr command. ``` mfind -t o ~~/stor/books | mjob create -o 'wc -w' ^^ 'maggr mean' added 5 inputs to 69b747da-e636-4146-8bca-84b883ca2a8c 134486.4 ``` Let's break down what just happened in that magical one-liner. First, we'll look at the mjob create command. mjob create -o submits a new job, and then waits for the job to finish, then fetches and concatenates the outputs for you, which is very useful for interactive ad-hoc queries. 
'wc -w' ^^ 'maggr mean' is a MapReduce definition that defines a 'map' \"phase\" of wc -w, and a reduce \"phase\" of maggr mean. maggr is one of several tools we have in the compute instances that mirror similar Unix tools. A \"phase\" is simply a command (or chain of commands) to execute on data. There are two types of phases: map and reduce. Map phases run the given command on every input object and stream the output to the next phase, which may be another map phase, or likely a reduce phase. Reduce phases are run once, and concatenate all data output from the previous phase. The system runs your map-reduce commands by invoking them in a new bash shell. By default your input data is available to your shell over stdin, and if you simply write output data to stdout, it is captured and moved to the next phase (this is how almost all standard Unix utilities work). mjob create uses the symbols ^ and ^^ to act like the standard Unix | (pipe) operator. The single ^ character indicates that the following command is part of the map phase. The double ^^ indicates that the following command is a reduce phase. In this syntax, the first phase is always a map phase. So the string 'wc -w' ^^ 'maggr mean', means \"execute wc -w on all objects given to the job\" and \"then run maggr mean on the data output from wc -w.\" maggr is a basic math utility function that is part of the compute environment. The above command could also have been written as: ``` mfind -t o ~~/stor/books | \\ mjob create -o 'wc -w' ^^ 'paste -sd+ | echo \"($(cat -))/$(mjob inputs $MANTA_JOB_ID | wc -l)\" | bc' ``` Which would create a mathematical string that bc can use that sums and then calculates the average by dividing by the number of inputs (which is retrieved dynamically). Although the compute facility provides a full SmartOS environment, your jobs may require special software, additional configuration information, or any other static file that is useful. You can make these available as assets, which are objects that are copied into the compute environment when your job is run. For example, suppose you want to do a word frequency count using shell scripts that contain your map and reduce logic. We can do this with two awk scripts, so let's write them and upload them as assets. map.sh outputs a mapping of word to occurrence, like hello 10: ``` { for (i = 1; i <= NF; i++) { counts[$i]++ } } END { for (i in counts) { print i, counts[i]; } } ``` Copy the above and paste into a file named map.sh, or if you are on Mac OS X, you can use the command below: ``` pbpaste > map.sh ``` red.sh simply combines the output of all the map outputs: ``` { byword[$1] += $2; } END { for (i in byword) { print i, byword[i] } } ``` Copy the above and paste into a file named red.sh, or if you are on Mac OS X, you can use the command below: ``` pbpaste > red.sh ``` To make the scripts available as assets, first store them in the service. ``` mput -f map.sh ~~/stor/map.sh $ mput -f red.sh ~~/stor/red.sh ``` Then use the -s switch to specify and use them in a job: ``` mfind -t o ~~/stor/books | mjob create -o -s ~~/stor/map.sh \\ -m '/assets/$MANTA_USER/stor/map.sh' \\ -s ~~/stor/red.sh \\ -r '/assets/$MANTA_USER/stor/red.sh | sort -k2,2 -n' ``` You'll see a trailing output like ``` ... 
a 13451 to 14979 of 15314 and 21338 the 32241 ``` If you'd like to see how long this takes: ``` time mfind -t o ~~/stor/books | mjob create -o -s ~~/stor/map.sh \\ -m '/assets/$MANTA_USER/stor/map.sh' \\ -s ~~/stor/red.sh \\ -r '/assets/$MANTA_USER/stor/red.sh | sort -k2,2 -n' ``` The time output at the end will look like: ``` real 0m7.942s user 0m1.324s sys 0m0.169s ``` Note that assets are made available to you in the compute environment under the path /assets. A more sophisticated program would likely use a list of stopwords to get rid of common words like \"and, the\" and so on, which could also be mapped in as an asset. This introduction gave you a basic overview of Manta storage service: how to work with objects and how to use the system's compute environment to operate on those objects. The system provides many more sophisticated features as well. Now let's take you through some simple examples of running node.js applications directly on the object store. We'll be using some assets that are present in the mantademo account. This is also a good example of how you can run compute with and on data people have made available in their ~~/public directories. We'll start with a \"Hello,Manta\" demo using node.js; you can see the script with an mget: ``` mget /mantademo/public/hello-manta-node.js console.log(\"hello,manta!!\"); ``` Now let's create a job using what we talked about above in the Running Jobs Using Assets section. We're going to start with something that's \"obvious\" but won't work. ``` mjob create -s /mantademo/public/hello-manta-node.js -m \"node /mantademo/public/hello-manta-node.js\" 30706a6b-6386-495b-9657-8a572b99d4f8 [this is a unique JOB ID] $ mjob get 30706a6b-6386-495b-9657-8a572b99d4f8 [replace with your actual JOB ID] { \"id\": \"30706a6b-6386-495b-9657-8a572b99d4f8\", \"name\": \"\", \"state\": \"running\", \"cancelled\": false, \"inputDone\": false, \"stats\": { \"errors\": 0, \"outputs\": 0, \"retries\": 0, \"tasks\": 0, \"tasksDone\": 0 }, \"timeCreated\": \"2013-06-16T19:47:30.610Z\", \"phases\": [ { \"assets\": [ \"/mantademo/public/hello-manta-node.js\" ], \"exec\": \"node /mantademo/public/hello-manta-node.js\", \"type\": \"map\" } ], \"options\": {} } ``` The inputDone field is \"false\" because we asked mjob to create a map phase, which requires at least one key, but we did not provide any keys. It's sort of an artifact of the hello world example and makes an important point. Let's cancel this job; in fact, let's cancel all jobs so we can clean up anything we've left running from the examples above. ``` mjob list -s running | xargs mjob cancel ``` This also highlights that the CLI tools behave like normal Unix tools. The following two commands are equivalent. ``` mjob get `mjob list` $ mjob list | xargs mjob get ``` Back to the node.js example, if we pipe the hello-manta-node.js in as a key and do it as a map phase with the -m flag: ``` echo /mantademo/public/hello-manta-node.js | mjob create -o -m \"node\" added 1 input to e7711dda-caac-412f-9355-61c8006819ae hello,manta!! ``` We can also do this as a reduce phase (using the -r flag). Reduce phases always run, even without keys. ``` mjob create -o </dev/null -s /mantademo/public/hello-manta-node.js \\ -r \"node /assets/mantademo/public/hello-manta-node.js\" hello,manta!! ``` The </dev/null is there so that we're redirecting stdin from /dev/null, and mjob create knows not to attempt to read any additional keys. Now let's take it up one more level. 
You can see what's inside a simple node.js application that capitalizes all the text in an input file. ``` mget /mantademo/public/capitalizer.js process.stdin.on('data', function(d) { process.stdout.write(d.toString().replace(/\\./g, '!').toUpperCase()); }); process.stdin.resume(); $ mget /mantademo/public/manta-desc.txt Manta Storage Service is a cloud service that offers both a highly available, highly durable object store and integrated compute. Application developers can store and process any amount of data at any time, from any location, without requiring additional compute resources. $ echo /mantademo/public/manta-desc.txt | mjob create -o -s /mantademo/public/capitalizer.js -m 'node /assets/mantademo/public/capitalizer.js' added 1 input to 2aa8a0a9-92e9-47f3-8b66-acf2a22d25a8 MANTA STORAGE SERVICE IS A CLOUD SERVICE THAT OFFERS BOTH A HIGHLY AVAILABLE, HIGHLY DURABLE OBJECT STORE AND INTEGRATED COMPUTE! APPLICATION DEVELOPERS CAN STORE AND PROCESS ANY AMOUNT OF DATA AT ANY TIME, FROM ANY LOCATION, WITHOUT REQUIRING ADDITIONAL COMPUTE RESOURCES! ``` For more details on compute jobs, see the Compute Jobs Reference documentation, along with the default installed software and" } ]
{ "category": "Runtime", "file_name": "docs.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you: Velero consists of: This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more. Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero. If you encounter issues, review the troubleshooting docs, file an issue, or talk to us on the If you are ready to jump in and test, add code, or help with documentation, follow the instructions on our Start contributing documentation for guidance on how to setup Velero for development. See the list of releases to find out about feature changes. To help you get started, see the documentation." } ]
{ "category": "Runtime", "file_name": "operator-guide.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:|:|:|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | README.md | README.md | README.md | nan | nan | | architecture.md | architecture.md | architecture.md | nan | nan | | deployment.md | deployment.md | deployment.md | nan | nan | | maintenance.md | maintenance.md | maintenance.md | nan | nan | | mantav2-migration.md | mantav2-migration.md | mantav2-migration.md | nan | nan | | View all files | View all files | View all files | nan | nan | (Note: This is the operator guide for Mantav2. If you are operating a mantav1 deployment, please see the Mantav1 Operator Guide.) This operator guide is divided into sections: Manta is an internet-facing object store. The user interface to Manta is essentially: Users can interact with Manta through the official Node.js CLI; the Node, or Java SDKs; curl(1); or any web browser. For more information, see the Manta user guide." } ]
{ "category": "Runtime", "file_name": "v1.13.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you: Velero consists of: This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more. Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero. If you encounter issues, review the troubleshooting docs, file an issue, or talk to us on the If you are ready to jump in and test, add code, or help with documentation, follow the instructions on our Start contributing documentation for guidance on how to setup Velero for development. See the list of releases to find out about feature changes. To help you get started, see the documentation." } ]
{ "category": "Runtime", "file_name": "docs.html.md", "project_name": "Vineyard", "subcategory": "Cloud Native Storage" }
[ { "data": "User Guides Cloud-Native Tutorials Integration API Reference Developer Guides an in-memory immutable data manager Sharing intermediate data between systems in modern big data and AI workflows can be challenging, often causing significant bottlenecks in such jobs. Lets consider the following fraud detection pipeline: A real-life fraud detection job From the pipeline, we observed: Users usually prefer to program with dedicated computing systems for different tasks in the same applications, such as SQL and Python. Integrating a new computing system into production environments demands high technical effort to align with existing production environments in terms of I/O, failover, etc. Data could be polymorphic. Non-relational data, such as tensors, dataframes (in Pandas) and graphs/networks (in GraphScope) are becoming increasingly prevalent. Tables and SQL may not be the best way to store, exchange, or process them. Transforming the data back and forth between different systems as tables could result in a significant overhead. Saving/loading the data to/from the external storage requires numerous memory copies and incurs high IO costs. Vineyard (v6d) is an in-memory immutable data manager that offers out-of-the-box high-level abstraction and zero-copy sharing for distributed data in big data tasks, such as graph analytics (e.g., GraphScope), numerical computing (e.g., Mars), and machine learning. Vineyard shares immutable data across different systems using shared memory without extra overheads, eliminating the overhead of serialization/deserialization and IO when exchanging immutable data between systems. Vineyard defines a metadata-payload separated data model to capture the payload commonalities and method commonalities between sharable objects in different programming languages and different computing systems in a unified way. The Code Generation for Boilerplate (Vineyard Component Description Language) is specifically designed to annotate sharable members and methods, enabling automatic generation of boilerplate code for minimal integration effort. In many big data analytical tasks, a substantial portion of the workload consists of boilerplate routines that are unrelated to the core computation. These routines include various IO adapters, data partition strategies, and migration jobs. Due to different data structure abstractions across systems, these routines are often not easily reusable, leading to increased complexity and redundancy. Vineyard provides common manipulation routines for immutable data as drivers, which extend the capabilities of data structures by registering appropriate drivers. This enables out-of-the-box reuse of boilerplate components across diverse computation jobs. Vineyard provides efficient distributed data sharing in cloud-native environments by embracing cloud-native big data processing. Kubernetes helps Vineyard leverage the scale-in/out and scheduling abilities of Kubernetes. Object manager Put and get arbitrary objects using Vineyard, in a zero-copy way! Cross-system sharing Share large objects across computing systems. Data orchestration Vineyard coordinates the flow of objects and jobs on Kubernetes based on data-aware scheduling. User Guides Get started with Vineyard. Deploy on Kubernetes Deploy Vineyard on Kubernetes and accelerate big-data analytical workflows on cloud-native infrastructures. Tutorials Explore use cases and tutorials where Vineyard can bring added value. 
Getting Involved Get involved and become part of the Vineyard community. FAQ Frequently asked questions and discussions during the adoption of Vineyard. Wenyuan Yu, Tao He, Lei Wang, Ke Meng, Ye Cao, Diwen Zhu, Sanhong Li, Jingren Zhou. Vineyard: Optimizing Data Sharing in Data-Intensive Analytics. ACM SIG Conference on Management of Data (SIGMOD), industry, 2023. Vineyard is a CNCF sandbox project and is made successful by its community." } ]
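To make the zero-copy put/get model described above concrete, here is a minimal sketch, assuming the vineyard Python package is installed; the socket path is illustrative, and the daemon flags may differ across versions (check vineyardd's help output):

```
$ python3 -m vineyard --socket=/tmp/vineyard.sock &   # start a local vineyardd
$ python3 -c '
import numpy as np
import vineyard

client = vineyard.connect("/tmp/vineyard.sock")  # attach via the IPC socket
oid = client.put(np.arange(8))                   # share an immutable object
print(client.get(oid))                           # read it back without copying
'
```

Any other process connected to the same socket can resolve the same object ID, which is what makes cross-system sharing cheap.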
{ "category": "Runtime", "file_name": ".md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "containerd overview Welcome to the containerd documentation! This document contains some basic project-level information about containerd. If youd like to get started running containerd locally on your machine, see the Getting started guide. See also other docs: https://github.com/containerd/containerd/tree/main/docs The containerd project is encapsulated in a variety of GitHub repositories. See https://github.com/containerd . Table of contents containerd Authors 2024 | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Runtime", "file_name": "docs.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "containerd overview Welcome to the containerd documentation! This document contains some basic project-level information about containerd. If youd like to get started running containerd locally on your machine, see the Getting started guide. See also other docs: https://github.com/containerd/containerd/tree/main/docs The containerd project is encapsulated in a variety of GitHub repositories. See https://github.com/containerd . Table of contents containerd Authors 2024 | Documentation Distributed under CC-BY-4.0 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This is the documentation for the latest development version of Velero. Both code and docs may be unstable, and these docs are not guaranteed to be up to date or correct. See the latest version. Having a high level design document with the proposed change and the impacts helps the maintainers evaluate if a major change should be incorporated. To make a design pull request, you can copy the template found in the design/_template.md file into a new Markdown file. You may join the Velero community and contribute in many different ways, including helping us design or test new features. For any significant feature we consider adding, we start with a design document. You may find a list of in progress new designs here: https://github.com/vmware-tanzu/velero/pulls?q=is%3Aopen+is%3Apr+label%3ADesign. Feel free to review and help us with your input. You can also vote on issues using :+1: and :-1:, as explained in our Feature enhancement request and Bug issue templates. This will help us quantify importance and prioritize issues. For information on how to connect with our maintainers and community, join our online meetings, or find good first issues, start on our Velero community page. Please browse our list of resources, including a playlist of past online community meetings, blog posts, and other resources to help you get familiar with our project: Velero resources. If you are ready to jump in and test, add code, or help with documentation, please use the navigation on the left under Contribute. To help you get started, see the documentation." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Edit this page Create issue gVisor is designed to provide a secure, virtualized environment while preserving key benefits of containerization, such as small fixed overheads and a dynamic resource footprint. For containerized infrastructure, this can provide a turn-key solution for sandboxing untrusted workloads: there are no changes to the fundamental resource model. gVisor imposes runtime costs over native containers. These costs come in two forms: additional cycles and memory usage, which may manifest as increased latency, reduced throughput or density, or not at all. In general, these costs come from two different sources. First, the existence of the Sentry means that additional memory will be required, and application system calls must traverse additional layers of software. The design emphasizes security and therefore we chose to use a language for the Sentry that provides benefits in this domain but may not yet offer the raw performance of other choices. Costs imposed by these design choices are structural costs. Second, as gVisor is an independent implementation of the system call surface, many of the subsystems or specific calls are not as optimized as more mature implementations. A good example here is the network stack, which is continuing to evolve but does not support all the advanced recovery mechanisms offered by other stacks and is less CPU efficient. This is an implementation cost and is distinct from structural costs. Improvements here are ongoing and driven by the workloads that matter to gVisor users and contributors. This page provides a guide for understanding baseline performance, and calls out distinct structural costs and implementation costs, highlighting where improvements are possible and not possible. While we include a variety of workloads here, its worth emphasizing that gVisor may not be an appropriate solution for every workload, for reasons other than performance. For example, a sandbox may provide minimal benefit for a trusted database, since user data would already be inside the sandbox and there is no need for an attacker to break out in the first place. All data below was generated using the benchmark tools repository, and the machines under test are uniform Google Compute Engine Virtual Machines (VMs) with the following specifications: ``` Machine type: n1-standard-4 (broadwell) Image: Debian GNU/Linux 9 (stretch) 4.19.0-0 BootDisk: 2048GB SSD persistent disk ``` Through this document, runsc is used to indicate the runtime provided by gVisor. When relevant, we use the name runsc-platform to describe a specific platform choice. Except where specified, all tests below are conducted with the ptrace platform. The ptrace platform works everywhere and does not require hardware virtualization or kernel modifications but suffers from the highest structural costs by far. This platform is used to provide a clear understanding of the performance model, but in no way represents an ideal scenario; users should use Systrap for best performance in most cases. In the future, this guide will be extended to bare metal environments and include additional platforms. gVisor does not introduce any additional costs with respect to raw memory accesses. Page faults and other Operating System (OS) mechanisms are translated through the Sentry, but once mappings are installed and available to the application, there is no additional overhead. The above figure demonstrates the memory transfer rate as measured by sysbench. 
The Sentry provides an additional layer of indirection, and it requires memory in order to store state associated with the application. This memory generally consists of a fixed component, plus an amount that varies with the usage of operating system resources (e.g. how many sockets or files are open). For many use cases, fixed memory overheads are a primary concern. This may be because sandboxed containers handle a low volume of requests, and it is therefore important to achieve high densities for efficiency. The above figure demonstrates these costs based on three sample applications. This test is the result of running many instances of a container (50, or 5 in the case of redis) and calculating available memory on the host before and afterwards, and dividing the difference by the number of containers. This technique is used for measuring memory usage over the usage_in_bytes value of the container cgroup because we found that some container runtimes, other than runc and runsc, do not use an individual container cgroup. The first application is an instance of sleep: a trivial application that does nothing. The second application is a synthetic node application which imports a number of modules and listens for requests. The third application is a similar synthetic ruby application which does the same. Finally, we include an instance of redis storing approximately 1GB of data. In all cases, the sandbox itself is responsible for a small, mostly fixed amount of memory overhead. gVisor does not perform emulation or otherwise interfere with the raw execution of CPU instructions by the application. Therefore, there is no runtime cost imposed for CPU operations. The above figure demonstrates the sysbench measurement of CPU events per second. Events per second is based on a CPU-bound loop that calculates all prime numbers in a specified range. We note that runsc does not impose a performance penalty, as the code is executing natively in both cases. This has important consequences for classes of workloads that are often CPU-bound, such as data processing or machine learning. In these cases, runsc will similarly impose minimal runtime overhead. For example, the above figure shows a sample TensorFlow workload, the convolutional neural network example. The time indicated includes the full start-up and run time for the workload, which trains a model. Some structural costs of gVisor are heavily influenced by the platform choice, which implements system call interception. Today, gVisor supports a variety of platforms. These platforms present distinct performance, compatibility and security trade-offs. For example, the KVM platform has low overhead system call interception but runs poorly with nested virtualization. The above figure demonstrates the time required for a raw system call on various platforms. The test is implemented by a custom binary which performs a large number of system calls and calculates the average time required. This cost will principally impact applications that are system call bound, which tend to be high-performance data stores and static network services. In general, the impact of system call interception will be lower the more work an application does. For example, redis is an application that performs relatively little work in userspace: in general it reads from a connected socket, reads or modifies some data, and writes a result back to the socket. The above figure shows the results of running a comprehensive set of benchmarks. 
We can see that small operations impose a large overhead, while larger operations, such as LRANGE, where more work is done in the application, have a smaller relative overhead. Some of these costs above are structural costs, and redis is likely to remain a challenging performance scenario. However, optimizing the platform will also have a dramatic impact. For many use cases, the ability to spin up containers quickly and efficiently is important. A sandbox may be short-lived and perform minimal user work (e.g. a function invocation). The above figure indicates the total time required to start a container through Docker. This benchmark uses three different applications. First, an alpine Linux container that executes true. Second, a node application that loads a number of modules and binds an HTTP server. The time is measured by a successful request to the bound port. Finally, a ruby application that similarly loads a number of modules and binds an HTTP server. Note: most of the time overhead above is associated with Docker itself. This is evident with the empty runc benchmark. To avoid these costs with runsc, you may also consider using runsc do mode or invoking the OCI runtime directly. Networking is mostly bound by implementation costs, and gVisor's network stack is improving quickly. While typically not an important metric in practice for common sandbox use cases, iperf is a common microbenchmark used to measure raw throughput. The above figure shows the result of an iperf test between two instances. For the upload case, the specified runtime is used for the iperf client, and in the download case, the specified runtime is the server. A native runtime is always used for the other endpoint in the test. The above figure shows the result of simple node and ruby web services that render a template upon receiving a request. Because these synthetic benchmarks do minimal work per request, much like the redis case, they suffer from high overheads. In practice, the more work an application does, the smaller the impact of structural costs becomes. Some aspects of file system performance are also reflective of implementation costs, and an area where gVisor's implementation is improving quickly. In terms of raw disk I/O, gVisor does not introduce significant fundamental overhead. For general file operations, gVisor introduces a small fixed overhead for data that transitions across the sandbox boundary. This manifests as structural costs in some cases, since these operations must be routed through the Gofer as a result of our Security Model, but in most cases are dominated by implementation costs, due to an internal Virtual File System (VFS) implementation that needs improvement. The above figures demonstrate the results of fio for reads and writes to and from the disk. In this case, the disk quickly becomes the bottleneck and dominates other costs. The above figure shows the raw I/O performance of using a tmpfs mount which is sandbox-internal in the case of runsc. Generally these operations are similarly bound to the cost of copying around data in-memory, and we don't see the cost of VFS operations. The high costs of VFS operations can manifest in benchmarks that execute many such operations in the hot path for serving requests, for example. The above figure shows the result of using gVisor to serve small pieces of static content with predictably poor results. 
This workload represents apache serving a single file sized 100k from the container image to a client running ApacheBench with varying levels of concurrency. The high overhead comes principally from the VFS implementation that needs improvement, with several internal serialization points (since all requests are reading the same file). Note that some of the network stack performance issues also impact this benchmark. For benchmarks that are bound by raw disk I/O and a mix of compute, file system operations are less of an issue. The above figure shows the total time required for an ffmpeg container to start, load and transcode a 27MB input video." } ]
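As mentioned in the start-up discussion above, much of the container start-up overhead belongs to Docker rather than to the sandbox. A quick sketch of bypassing Docker entirely with runsc do, which spins up a sandbox just for one command (depending on your setup you may need extra flags such as --rootless or --network):

```
$ sudo runsc do echo hello
hello
```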
{ "category": "Runtime", "file_name": ".md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "Registry, the open source implementation for storing and distributing container images and other content, has been donated to the CNCF. Registry now goes under the name of Distribution, and the documentation has moved to distribution/distribution. The Docker Hub registry implementation is based on Distribution. Docker Hub implements version 1.0.1 OCI distribution specification. For reference documentation on the API protocol that Docker Hub implements, refer to the OCI distribution specification. Docker Hub supports the following image manifest formats for pulling images: You can push images with the following formats: Docker Hub also supports OCI artifacts. See OCI artifacts. For documentation related to authentication to the Docker Hub registry, see: Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "Runtime", "file_name": "docs.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Edit this page Create issue gVisor adds a layer of security to your AI/ML applications or other CUDA workloads while adding negligible overhead. By running these applications in a sandboxed environment, you can isolate your host system from potential vulnerabilities in AI code. This is crucial for handling sensitive data or deploying untrusted AI workloads. gVisor supports running most CUDA applications on preselected versions of NVIDIAs open source driver. To achieve this, gVisor implements a proxy driver inside the sandbox, henceforth referred to as nvproxy. nvproxy proxies the applications interactions with NVIDIAs driver on the host. It provides access to NVIDIA GPU-specific devices to the sandboxed application. The CUDA application can run unmodified inside the sandbox and interact transparently with these devices. The runsc flag --nvproxy must be specified to enable GPU support. gVisor supports GPUs in the following environments. The nvidia-container-runtime is packaged as part of the NVIDIA GPU Container Stack. This runtime is just a shim and delegates all commands to the configured low level runtime (which defaults to runc). To use gVisor, specify runsc as the low level runtime in /etc/nvidia-container-runtime/config.toml via the runtimes option and then run CUDA containers with nvidia-container-runtime. The runtimes option allows to specify an executable path or executable name that is searchable in $PATH. To specify runsc with specific flags, the following executable can be used: ``` exec /path/to/runsc --nvproxy <other runsc flags> \"$@\" ``` NOTE: gVisor currently only supports legacy mode. The alternative, csv mode, is not yet supported. The legacy mode of nvidia-container-runtime is directly compatible with the --gpus flag implemented by the docker CLI. So with Docker, runsc can be used directly (without having to go through nvidia-container-runtime). ``` $ docker run --runtime=runsc --gpus=all --rm -it nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubi8 [Vector addition of 50000 elements] Copy input data from the host memory to the CUDA device CUDA kernel launch with 196 blocks of 256 threads Copy output data from the CUDA device to the host memory Test PASSED Done ``` GKE uses a different GPU container stack than NVIDIAs. GKE has its own device plugin (which is different from k8s-device-plugin). GKEs plugin modifies the container spec in a different way than the above-mentioned methods. NOTE: nvproxy does not have integration support for k8s-device-plugin yet. So k8s environments other than GKE might not be supported. gVisor supports a wide range of CUDA workloads, including PyTorch and various generative models like LLMs. Check out this blog post about running Stable Diffusion with gVisor. gVisor undergoes continuous tests to ensure this functionality remains robust. Real-world usage of gVisor across different CUDA workloads helps discover and address potential compatibility or performance issues in nvproxy. nvproxy is a passthrough driver that forwards ioctl(2) calls made to NVIDIA devices by the containerized application directly to the host NVIDIA driver. This forwarding is straightforward: ioctl parameters are copied from the applications address space to the sentrys address space, and then a host ioctl syscall is made. ioctls are passed through with minimal intervention; nvproxy does not emulate NVIDIA kernel-mode driver (KMD) logic. 
This design translates to minimal overhead for GPU operations, ensuring that GPU-bound workloads experience negligible performance degradation. However, the presence of pointers and file descriptors within some ioctl structs forces nvproxy to perform appropriate translations. This requires nvproxy to be aware of the KMD's ABI, specifically the layout of ioctl structs. The challenge is compounded by the lack of ABI stability guarantees in NVIDIA's KMD, meaning ioctl definitions can change arbitrarily between releases. While the NVIDIA installer ensures matching KMD and user-mode driver (UMD) component versions, a single gVisor version might be used with multiple NVIDIA drivers. As a result, nvproxy must understand the ABI for each supported driver version, necessitating internal versioning logic for ioctls. Consequently, nvproxy is subject to a few limitations: it supports only certain GPU models, driver versions, device files, and ioctls, as described below. gVisor currently supports NVIDIA GPUs: T4, L4, A100, A10G and H100. Please open a GitHub issue if you want support for another GPU model. The range of driver versions supported by nvproxy directly aligns with those available within GKE. As GKE incorporates newer drivers, nvproxy will extend support accordingly. Conversely, to manage versioning complexity, nvproxy will drop support for drivers removed from GKE. This strategy ensures a streamlined process and avoids unbounded growth in nvproxy's versioning. To see what drivers a given runsc version supports, run: ``` $ runsc nvproxy list-supported-drivers ``` gVisor only exposes /dev/nvidiactl, /dev/nvidia-uvm and /dev/nvidia#. Some NVIDIA device files remain unsupported. To minimize maintenance overhead across supported driver versions, the set of supported NVIDIA device ioctls is intentionally limited. This set was generated by running a large number of CUDA workloads in gVisor. As nvproxy is adapted to more use cases, this set will continue to evolve. Currently, nvproxy focuses on supporting compute workloads (like CUDA). Graphics and video capabilities are not yet supported due to missing ioctls. If your GPU compute workload fails with gVisor, please note that some ioctl commands might still be unimplemented. Please open a GitHub issue to describe your use case. If a missing ioctl implementation is the problem, then the debug logs will contain warnings with the prefix nvproxy: unknown *. While CUDA support enables important use cases for gVisor, it is important for users to understand the security model around the use of GPUs in sandboxes. In short, while gVisor will protect the host from the sandboxed application, NVIDIA driver updates must be part of any security plan with or without gVisor. First, a short discussion of gVisor's security model. gVisor protects the host from sandboxed applications by providing several layers of defense. The layers most relevant to this discussion are the redirection of application syscalls to the gVisor sandbox and the use of seccomp-bpf on gVisor sandboxes. gVisor uses a platform to tell the host kernel to reroute system calls to the sandbox process, known as the sentry. The sentry implements a syscall table, which services all application syscalls. The Sentry may make syscalls to the host kernel if it needs them to fulfill the application syscall, but it doesn't merely pass an application syscall to the host kernel. On sandbox boot, seccomp filters are applied to the sandbox. 
Seccomp filters applied to the sandbox constrain the set of syscalls that it can make to the host kernel, blocking access to most host kernel vulnerabilities even if the sandbox becomes compromised. For example, CVE-2022-0185 is mitigated because gVisor itself handles the syscalls required to use namespaces and capabilities, so the application is using gVisor's implementation, not the host kernel's. For a compromised sandbox, the syscalls required to exploit the vulnerability are blocked by seccomp filters. In addition, seccomp-bpf filters can filter by argument names, allowing us to allowlist granularly by ioctl(2) arguments. ioctl(2) is a source of many bugs in any kernel due to the complexity of its implementation. As of writing, gVisor does allowlist some ioctls by argument for things like terminal support. For example, CVE-2024-21626 is mitigated by gVisor because the application would use gVisor's implementation of ioctl(2). For a compromised sentry, ioctl(2) calls with the needed arguments are not in the seccomp filter allowlist, blocking the attacker from making the call. gVisor also mitigates similar vulnerabilities that come with device drivers (CVE-2023-33107). Recall that nvproxy allows applications to directly interact with supported ioctls defined in the NVIDIA driver. gVisor's seccomp filter rules are modified such that ioctl(2) calls can be made only for supported ioctls. The allowlisted rules are aligned with each driver version. This approach is similar to the allowlisted ioctls for terminal support described above. This allows gVisor to retain the vast majority of its protection for the host while allowing access to GPUs. All of the above CVEs remain mitigated even when nvproxy is used. However, gVisor is much less effective at mitigating vulnerabilities within the NVIDIA GPU drivers themselves, because gVisor passes through calls to be handled by the kernel module. If there is a vulnerability in a given driver for a given GPU ioctl (read feature) that gVisor passes through, then gVisor will also be vulnerable. If the vulnerability is in an unimplemented feature, gVisor will block the required calls with seccomp filters. In addition, gVisor doesn't introduce any additional hardware-level isolation beyond that which is configured by the NVIDIA kernel-mode driver. There is no validation of things like DMA buffers. The only checks are done in seccomp-bpf rules to ensure ioctl(2) calls are made on supported and allowlisted ioctls. Therefore, it is imperative that users update NVIDIA drivers in a timely manner with or without gVisor. To see the latest drivers gVisor supports, you can run the following with your runsc release: ``` $ runsc nvproxy list-supported-drivers ``` Alternatively you can view the source code or download it and run: ``` $ make run TARGETS=runsc:runsc ARGS=\"nvproxy list-supported-drivers\" ``` While gVisor doesn't protect against all NVIDIA driver vulnerabilities, it does protect against a large set of general vulnerabilities in Linux. Applications don't just use GPUs, they use them as a part of a larger application that may include third party libraries. For example, Tensorflow suffers from the same kind of vulnerabilities that every application does. Designing and implementing an application with security in mind is hard and in the emerging AI space, security is often overlooked in favor of getting to market fast. There are also many services that allow users to run external users' code on the vendor's infrastructure. 
gVisor is well suited as part of a larger security plan for these and other use cases." } ]
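Since driver hygiene matters with or without gVisor, one way to wire the check described above into a deploy script is to compare the host's driver version against runsc's supported list. This sketch assumes list-supported-drivers prints bare version strings, so adjust the match to your runsc version's actual output:

```
$ host_driver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -1)
$ runsc nvproxy list-supported-drivers | grep -q "$host_driver" \
    && echo "driver $host_driver is supported" \
    || echo "driver $host_driver is NOT supported; upgrade or pin it"
```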
{ "category": "Runtime", "file_name": "faq.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Edit this page Create issue Today, gVisor requires Linux. gVisor currently supports x86_64/AMD64 compatible processors. Preliminary support is also available for ARM64. No. gVisor is capable of running unmodified Linux binaries. gVisor supports Linux ELF binaries. Binaries run in gVisor should be built for the AMD64 or AArch64 CPU architectures. Yes. Please see the Docker Quick Start. Yes. Please see the Kubernetes Quick Start. See the Production guide. See the Security Model. If youre having problems running a container with runsc its most likely due to a compatibility issue or a missing feature in gVisor. See Debugging. You are using an older version of Linux which doesnt support memfd_create. This is tracked in bug #268. Youre using an old version of Docker. See Docker Quick Start. For performance reasons, gVisor caches directory contents, and therefore it may not realize a new file was copied to a given directory. To invalidate the cache and force a refresh, create a file under the directory in question and list the contents again. As a workaround, shared root filesystem can be enabled. See Filesystem. This bug is tracked in bug #4. Note that kubectl cp works because it does the copy by execing inside the sandbox, and thus gVisors internal cache is made aware of the new files and directories. Make sure that permissions is correct on the runsc binary. ``` sudo chmod a+rx /usr/local/bin/runsc ``` If your Kernel is configured with YAMA LSM (see https://www.kernel.org/doc/Documentation/security/Yama.txt and https://man7.org/linux/man-pages/man2/ptrace.2.html) gVisor may fail in certain modes (i.e., systrap and/or directfs) with this error if /proc/sys/kernel/yama/ptrace_scope is set to 2. If this is the case, try setting /proc/sys/kernel/yama/ptrace_scope to max of mode 1: ``` sudo cat /proc/sys/kernel/yama/ptrace_scope 2 sudo bash -c 'echo 1 > /proc/sys/kernel/yama/ptrace_scope' ``` There is a bug in Linux kernel versions 5.1 to 5.3.15, 5.4.2, and 5.5. Upgrade to a newer kernel or add the following to /lib/systemd/system/containerd.service as a workaround. ``` LimitMEMLOCK=infinity ``` And run systemctl daemon-reload && systemctl restart containerd to restart containerd. See issue #1765 for more details. This error indicates that the Kubernetes CRI runtime was not set up to handle runsc as a runtime handler. Please ensure that containerd configuration has been created properly and containerd has been restarted. See the containerd quick start for more details. If you have ensured that containerd has been set up properly and you used kubeadm to create your cluster please check if Docker is also installed on that system. Kubeadm prefers using Docker if both Docker and containerd are installed. Please recreate your cluster and set the --cri-socket option on kubeadm commands. For example: ``` kubeadm init --cri-socket=/var/run/containerd/containerd.sock ... ``` To fix an existing cluster edit the /var/lib/kubelet/kubeadm-flags.env file and set the --container-runtime flag to remote and set the --container-runtime-endpoint flag to point to the containerd socket. e.g. /var/run/containerd/containerd.sock. This is normally indicated by errors like bad address 'container-name' when trying to communicate to another container in the same network. Docker user defined bridge uses an embedded DNS server bound to the loopback interface on address 127.0.0.10. This requires access to the host network in order to communicate to the DNS server. 
runsc network is isolated from the host and cannot access the DNS server on the host network without breaking the sandbox isolation. There are a few different workarounds you can try; one is sketched below. This error may happen when using gvisor-containerd-shim with a containerd that does not contain the fix for CVE-2020-15257. To resolve the issue, update containerd to 1.3.9 or 1.4.3 (or newer versions respectively)." } ]
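One workaround is to sidestep the embedded DNS server entirely and address peer containers by IP; a sketch, with the container and network names illustrative:

```
$ peer_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' other-container)
$ docker run --runtime=runsc --network=my-bridge --rm busybox ping -c 1 "$peer_ip"
```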
{ "category": "Runtime", "file_name": "docs.github.com.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "github-privacy-statement.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 through D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 through D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they control)." }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users.
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our content." }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service.
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interfaces), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, \"Confidential Information\"), regardless of whether it is marked or identified as such." }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the \"Purpose\"), and not for any other purpose.
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, \"Feedback\"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement.
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable rights." }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service \"as is\", and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service \"as is\" and \"as available,\" without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service, or that otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense." }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "Runtime", "file_name": "github-terms-of-service.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "Runtime", "file_name": "dimensions-define.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "Slack Join our community channel on Slack WeChat Scan our group chat QR code to join Weekly Meeting Technical Seminar Weekly Meeting GitHub Official repository on GitHub Gitee Official repository on Gitee Ecosystem is used to describe the health status of open source community standing from ecology context. We create a three-dimensional space for the evaluation system, including the open source ecosystem, 'collaboration, people, software' and evaluation models. Ecosystem is used to describe the health status of open source community standing from ecology context. We create a three-dimensional space for the evaluation system, including the open source ecosystem, \"collaboration, people, software\" and evaluation models. 3 items 2 items 2 items 4 items Copyright 2023 OSS compass. All Rights Reserved." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Krustlet", "subcategory": "Container Runtime" }
[ { "data": "Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods. Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also evaluates other parameters as part of its function. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. You add a taint to a node using kubectl taint. For example, ``` kubectl taint nodes node1 key1=value1:NoSchedule ``` places a taint on node node1. The taint has key key1, value value1, and taint effect NoSchedule. This means that no pod will be able to schedule onto node1 unless it has a matching toleration. To remove the taint added by the command above, you can run: ``` kubectl taint nodes node1 key1=value1:NoSchedule- ``` You specify a toleration for a pod in the PodSpec. Both of the following tolerations \"match\" the taint created by the kubectl taint line above, and thus a pod with either toleration would be able to schedule onto node1: ``` tolerations: key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" ``` ``` tolerations: key: \"key1\" operator: \"Exists\" effect: \"NoSchedule\" ``` The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler; the Pod is then bound onto the node where you assigned it, even if there are NoSchedule taints on that node that you selected. If this happens and the node also has a NoExecute taint set, the kubelet will eject the Pod unless there is an appropriate tolerance set. Here's an example of a pod that has some tolerations defined: ``` apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: containers: name: nginx image: nginx imagePullPolicy: IfNotPresent tolerations: key: \"example-key\" operator: \"Exists\" effect: \"NoSchedule\" ``` The default value for operator is Equal. A toleration \"matches\" a taint if the keys are the same and the effects are the same, and: There are two special cases: An empty key with operator Exists matches all keys, values and effects which means this will tolerate everything. An empty effect matches all effects with key key1. The above example used the effect of NoSchedule. Alternatively, you can use the effect of PreferNoSchedule. The allowed values for the effect field are: You can put multiple taints on the same node and multiple tolerations on the same pod. 
The way Kubernetes processes multiple taints and tolerations is like a filter: start with all of a node's taints, then ignore the ones for which the pod has a matching toleration; the remaining un-ignored taints have the indicated effects on the" }, { "data": "pod. In particular, if there is at least one un-ignored taint with effect NoSchedule, then Kubernetes will not schedule the pod onto that node; if there is no un-ignored taint with effect NoSchedule but there is at least one un-ignored taint with effect PreferNoSchedule, then Kubernetes will try not to schedule the pod onto the node; and if there is at least one un-ignored taint with effect NoExecute, then the pod will be evicted from the node (if it is already running on the node) and will not be scheduled onto the node (if it is not yet running on the node). For example, imagine you taint a node like this: ```
kubectl taint nodes node1 key1=value1:NoSchedule
kubectl taint nodes node1 key1=value1:NoExecute
kubectl taint nodes node1 key2=value2:NoSchedule
``` And a pod has two tolerations: ```
tolerations:
- key: \"key1\"
  operator: \"Equal\"
  value: \"value1\"
  effect: \"NoSchedule\"
- key: \"key1\"
  operator: \"Equal\"
  value: \"value1\"
  effect: \"NoExecute\"
``` In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. Normally, if a taint with effect NoExecute is added to a node, then any pods that do not tolerate the taint will be evicted immediately, and pods that do tolerate the taint will never be evicted. However, a toleration with NoExecute effect can specify an optional tolerationSeconds field that dictates how long the pod will stay bound to the node after the taint is added. For example, ```
tolerations:
- key: \"key1\"
  operator: \"Equal\"
  value: \"value1\"
  effect: \"NoExecute\"
  tolerationSeconds: 3600
``` means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted. Taints and tolerations are a flexible way to steer pods away from nodes or evict pods that shouldn't be running. A few of the use cases are: Dedicated Nodes: If you want to dedicate a set of nodes for exclusive use by a particular set of users, you can add a taint to those nodes (say, kubectl taint nodes nodename dedicated=groupName:NoSchedule) and then add a corresponding toleration to their pods (this would be done most easily by writing a custom admission controller). The pods with the tolerations will then be allowed to use the tainted (dedicated) nodes as well as any other nodes in the cluster. If you want to dedicate the nodes to them and ensure they only use the dedicated nodes, then you should additionally add a label similar to the taint to the same set of nodes (e.g. dedicated=groupName), and the admission controller should additionally add a node affinity to require that the pods can only schedule onto nodes labeled with dedicated=groupName. Nodes with Special Hardware: In a cluster where a small subset of nodes have specialized hardware (for example GPUs), it is desirable to keep pods that don't need the specialized hardware off of those nodes, thus leaving room for later-arriving pods that do need the specialized hardware. This can be done by tainting the nodes that have the specialized hardware (e.g. kubectl taint nodes nodename special=true:NoSchedule or kubectl taint nodes nodename special=true:PreferNoSchedule) and adding a corresponding toleration to pods that use the special hardware. As in the dedicated nodes use case, it is probably easiest to apply the tolerations using a custom admission controller. 
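To make the special-hardware pattern concrete, here is a minimal sketch of a matching taint and toleration pair; gpu-node-1 and the special=true key/value are placeholder names, not anything mandated by Kubernetes: ```
kubectl taint nodes gpu-node-1 special=true:NoSchedule
``` ```
tolerations:
- key: \"special\"
  operator: \"Equal\"
  value: \"true\"
  effect: \"NoSchedule\"
``` Any pod carrying this toleration may land on gpu-node-1, while pods without it are kept off.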
For example, it is recommended to use Extended Resources to represent the special hardware, taint your special hardware nodes with the extended resource name, and run the ExtendedResourceToleration admission controller. Now, because the nodes are tainted, no pods without the toleration will schedule on them. But when you submit a pod that requests the extended resource, the ExtendedResourceToleration admission controller will automatically add the correct toleration to the pod and that pod will schedule on the special hardware" }, { "data": "nodes. This will make sure that these special hardware nodes are dedicated for pods requesting such hardware, and you don't have to manually add tolerations to your pods. Taint-based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section. The node controller automatically taints a Node when certain conditions are true. The following taints are built in: node.kubernetes.io/not-ready (the node's Ready condition is False), node.kubernetes.io/unreachable (the node is unreachable from the node controller, i.e. its Ready condition is Unknown), node.kubernetes.io/memory-pressure, node.kubernetes.io/disk-pressure, node.kubernetes.io/pid-pressure, node.kubernetes.io/network-unavailable, node.kubernetes.io/unschedulable, and node.cloudprovider.kubernetes.io/uninitialized (set when the kubelet is started with an external cloud provider, and removed once the node is initialized). In case a node is to be drained, the node controller or the kubelet adds relevant taints with NoExecute effect. This effect is added by default for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints. If the fault condition returns to normal, the kubelet or node controller can remove the relevant taint(s). In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node. You can specify tolerationSeconds for a Pod to define how long that Pod stays bound to a failing or unresponsive Node. For example, you might want to keep an application with a lot of local state bound to the node for a long time in the event of network partition, hoping that the partition will recover and thus the pod eviction can be avoided. The toleration you set for that Pod might look like: ```
tolerations:
- key: \"node.kubernetes.io/unreachable\"
  operator: \"Exists\"
  effect: \"NoExecute\"
  tolerationSeconds: 6000
``` Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300, unless you, or a controller, set those tolerations explicitly. These automatically-added tolerations mean that Pods remain bound to Nodes for 5 minutes after one of these problems is detected. DaemonSet pods are created with NoExecute tolerations for the following taints with no tolerationSeconds: node.kubernetes.io/unreachable and node.kubernetes.io/not-ready. This ensures that DaemonSet pods are never evicted due to these problems. The control plane, using the node controller, automatically creates taints with a NoSchedule effect for node conditions. The scheduler checks taints, not node conditions, when it makes scheduling decisions. This ensures that node conditions don't directly affect scheduling. For example, if the DiskPressure node condition is active, the control plane adds the node.kubernetes.io/disk-pressure taint and does not schedule new pods onto the affected node. If the MemoryPressure node condition is active, the control plane adds the node.kubernetes.io/memory-pressure taint. You can ignore node conditions for newly created pods by adding the corresponding Pod tolerations. The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class other than BestEffort. 
This is because Kubernetes treats pods in the Guaranteed or Burstable QoS classes (even pods with no memory request set) as if they are able to cope with memory pressure, while new BestEffort pods are not scheduled onto the affected node. The DaemonSet controller automatically adds the following NoSchedule tolerations to all daemons, to prevent DaemonSets from breaking: node.kubernetes.io/memory-pressure, node.kubernetes.io/disk-pressure, node.kubernetes.io/pid-pressure, node.kubernetes.io/unschedulable, and node.kubernetes.io/network-unavailable (host network only). Adding these tolerations ensures backward compatibility. You can also add arbitrary tolerations to DaemonSets." } ]
{ "category": "Runtime", "file_name": ".md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "Learn how NVIDIA is using Kata Containers to support AI/ML workloads! Start here Understand the basics, contribute to and try using Kata Containers. Installation Guides : Install and run Kata Containers with Docker or Kubernetes Upgrading : How to upgrade from Clear Containers and runV to Kata Containers and how to upgrade an existing Kata Containers system to the latest version. Limitations : Differences and limitations compared with the default Docker runtime, runc. How to : Kata Containers and containerd with Kubernetes. How to : OpenStack Zun with Kata Containers. How to : Kata Containers with Firecracker. Design and Implementations How to Contribute Kata Containers is an independent open source community collaboratively developing code under the Apache 2 license. The project is supported by the Open Infrastructure Foundation; the community follows the OpenInfra Foundation Code of Conduct." } ]